
The Computerization of Economics: Computers, Programming, and the Internet in the History of Economics

Paper Session

Saturday, Jan. 8, 2022 12:15 PM - 2:15 PM (EST)

Hosted By: History of Economics Society
  • Chair: Cléo Chassonnery-Zaïgouche, University of Cambridge

The Propagation and Consolidation of Technical Knowledge through Web Forums: The Statalist Case

Pierrick Dechaux, Network in Epistemology and History of Thought

Abstract

The paper investigates the role of specialized web forums in the development and dissemination of econometric software. We focus on the history of Statalist, a web forum related to Stata, one of the most widely used software packages for econometrics. Statalist was established in 2004; since then, it has become the most popular virtual place where Stata users share questions and answers about the use of the software. We argue, moreover, that the emergence of Statalist marked the formation of a web-based community of Stata users, with distinctive actors and purposes. Notably, the paper sheds light on some “hidden figures” (i.e. Statalist’s moderators and main contributors, such as Nick Cox). We highlight how the Statalist community serves as a vector for sharing and elaborating a specific type of economic knowledge about the software’s uses (methods, applications, coding, choice of packages, debugging, etc.). Finally, we observe how such knowledge feeds back into the development of Stata itself. To establish these results, the paper combines different historical sources. Besides the literature and technical documentation, the paper relies on a quantitative analysis of Statalist posts and on oral history (interviews with key figures in the Statalist community).

From Computors to Computers: The EDSAC and Cambridge Microeconometricians

Chung-Tang Cheng, National Taipei University

Abstract

The paper argues that the Electronic Delay Storage Automatic Calculator (EDSAC) at the University of Cambridge played a pivotal role in the development of microeconometrics at the Department of Applied Economics (DAE) under the coordination of Richard Stone, the department’s founding director. Since its founding in 1946, the DAE had relied on two means of econometric computation: human computors, who did the computing work on desk calculators, and the ‘regression analyzer’, an analogue machine invented by Guy Orcutt, a former DAE affiliate. After the EDSAC was put into operation in 1949, a new group of econometricians, including Sigbert Prais, Hendrik Houthakker, Michael Farrell, and Alan Brown, became the very first group of economists to program the EDSAC. With significant improvements in microdata sources and technology, econometric practice by DAE affiliates boomed in the early 1950s, constituting the first series of collective contributions to post-war microeconometrics. These contributions made the DAE one of the top-tier research institutions in econometrics by the mid-1950s. The Cambridge experience is one of the most successful examples of econometric collaboration in the early years of computer-based calculation. Econometric practice thereafter turned into a new division of labour focused more on teamwork and programming expertise. The emergence of the EDSAC not only brought massive computing power to the DAE but also signalled the turning point after which econometric knowledge could hardly be produced individually.

The Need for Speed: Electronic Computers in Business Forecasting at Mid-Century

Laetitia Lenel, Humboldt University of Berlin

Abstract

For the greater part of the twentieth century, business forecasting was a struggle against time. Forecasters needed reliable, comprehensive, and up-to-date data that would allow them to extrapolate trends for the months ahead. But the collection and analysis of economic data was a laborious process and became even more so as the mass of statistical data grew. Focusing on the history of the Leading Indicators developed at the National Bureau of Economic Research (NBER), this paper investigates how the introduction of large-scale computers in the early 1950s altered the field of business cycle research and forecasting. While the advent of large-scale electronic computers promised to solve the problem of time, making it possible for the first time to process economic data within minutes, electronic computers brought about new problems of time. They required, first, complex infrastructures that involved time-consuming and tedious organizational processes. Second, until the development of the microprocessor, computer time was scarce. Both factors pushed business cycle researchers to transform time series analysis into a highly standardized process relegated almost entirely to the electronic computer. While this did indeed allow for an unprecedented acceleration in data processing, it also limited the possibilities of exercising trained judgment. Driven by the notion of the business cycle, the researchers had until then relied on judgment to correct economic data for those components that did not fit the patterns they had identified as representing cyclical change. Based on the analysis of past data, they had carved out average cyclical patterns for each time series against which they matched and “corrected” current data. Such an application of judgment was impossible for an electronic computer. As this paper argues, this ultimately challenged the researchers’ concepts of economic change and called into question the very project of forecasting itself.

A Tale of Two Laboratories: The Role of Computers in the Emergence of Experimental Economics

Andrej Svorenčik, University of Mannheim

Abstract

The history of experimental economics is the history of the creation of new sites for collecting controlled economic observations: economics laboratories. As these sites transitioned from ad hoc to purpose-built facilities, computers became their most important feature. Two different, and hence competing, technologies were used for connecting computers and the experimental subjects sitting behind them. The difference between these technologies is exemplified by the laboratories of two leading figures in experimental economics. Vernon Smith’s Economic Science Laboratory at the University of Arizona, established in 1985, used touch-screen terminals connected to a time-sharing mainframe computer. In contrast, Charles Plott’s Laboratory for Experimental Economics and Political Science (EEPS) at Caltech, founded in 1986, used locally networked personal computers. Each technological solution had different implications for the laboratory infrastructure, the types of experiments that could easily be programmed and implemented, and the interaction of experimental subjects. Eventually, local networks became the standard, and EEPS became the paragon for subsequent laboratories. Moreover, the number of computerized laboratories grew with the general trend of exponential increases in processing power and transistor density and the standardization of networking technology, accompanied by falling costs. While such developments explain the spread of computers at universities, they do not satisfactorily explain why computers began to be widely used in economic experiments. Computerized experiments offered many advantages over pen-and-paper experiments. On an operational level, they essentially substituted for the experimentalist’s labor. Furthermore, by relying on an explicit routine or procedure in the form of computer code, the experimenters’ experience-based and often tacit skills of running experiments were displaced.
On a conceptual level, computerization provided experimentalists with the transformative power to design not only any existing institution but also completely new ones. Eventually, many such institutions left the confines of the laboratory and were implemented beyond it.

Discussant(s)
Marcel Boumans, University of Utrecht
Beatrice Cherrier, CNRS and ENSAE/Ecole Polytechnique
JEL Classifications
  • B2 - History of Economic Thought since 1925
  • B4 - Economic Methodology