This article was written on 26 Aug 2013, and is filed under Science / Technology.

Lively Artifacts

Ross Ashby with Ashby Box and Grandfather Clock
Photo courtesy Daniel Babcock

Last month’s New York Times article, “Apps That Know What You Want Before You Do,” casts “predictive searching” as the devil of our future relationship with technology. And yet, the distress of that future recalls the technological dilemmas of our past, and our continued need to come to grips with how we view technology.

Cybernetic research in the 1960s and early 1970s faced a similar technological dilemma. For one thing, the increasing acknowledgement that human beings share specific organizational principles with biological and technological systems had blurred the boundaries between subjects and objects. For another, cyberneticians such as Heinz von Foerster and the members of his Biological Computer Laboratory (BCL) were growing increasingly worried that the automated technologies of future societies could threaten individual liberty and constrain human creativity. These cyberneticians felt the need to take action, for, as Foerster famously put it: “If we don’t act ourselves, we shall be acted upon” (Foerster 1972).

In attempting to address this dilemma, cyberneticians constructed electromechanical prototypes in order to explore the potential of biological design, including the impact of biological design on the relation between human beings and machines. In a more contemporary context of science studies and media theory, this endeavor would be alternately interpreted as a criticism of modern anthropocentrism and the “subjectless philosophy of mind” (Dupuy 2001), or an example of a “nonmodern ontology” (Pickering 2010). The fact that these cyberneticians succeeded in building machines that appear to behave autonomously, thus imitating human traits, explodes classic beliefs in the singularity of human nature, renewing the call for a non-dualistic, subjectless worldview.  The interpretation of these efforts as an example of nonmodern ontology, in turn, intimates that cybernetics is a nonmodern science.

Although cybernetics’ physical constructions certainly raise questions about the human condition and any need to draw distinctions between subjects and objects, I object to the retroactive interpretation of cybernetics as a nonmodern science. Such a reading neglects the historical context of cybernetics itself. Further, it misconstrues the underlying motives of the cybernetic project, especially the particular effort to produce electromechanical prototypes.

To argue against any interpretation of cybernetics as a nonmodern ontology, I first extend an argument Katherine Hayles makes concerning Norbert Wiener’s work (Hayles 1999). Hayles argues that Wiener did not simply propose that human behavior was, in principle, machinelike and automatable. Rather, she tells us, Wiener wanted to fashion “human and machine alike in the image of an autonomous, self-directed individual,” meaning that Wiener understood “cybernetics as a means to extend liberal humanism, not subvert it” (ibid., 7). We can still find traces of a humanist and in fact very modern agenda at the core of the “Second Wave of Cybernetics” (ibid., 10). This means that, rather than challenging man’s privileged position among animals and machines, the cyberneticians of this period were trying to affirm and stabilize human existence by constructing artifacts that mirrored humans’ own autonomous and creative behavior.

Beyond their efforts to affirm and stabilize human existence, BCL engineers were well aware of the fact that any interpretation of a machine’s behavior as “autonomous” or “creative” largely depends on the perspective of the machine’s observers and creators. In other words, rather than trying to engage machines immediately and non-hierarchically, cybernetic models were built to be perceived as objects that behave like subjects. Importantly, the ability to perceive this behavior presupposes a clear distinction between the perceiving/knowing system and its (machine) environment. Given the limitations of any human perspective involved in such a man-machine interaction, cybernetic engineers advanced the idea that an isolated subject cannot overcome its epistemological standpoint (i.e. point of view). In other words, the subject must interpret machine behavior as “intelligent,” “self-organizing,” or “autonomous” on the basis of his or her limited, perspectival knowledge. Given this, one should think of these cybernetic models as a material form of knowledge-critique rather than as an attempt to pave the way for an immediate encounter with the material world.

Having laid out my objections to a retroactive interpretation of cybernetics as a nonmodern science, I propose – borrowing a term from Warren McCulloch (McCulloch 1965) – that we refer to these cybernetic machines as lively artifacts. The moniker captures the essential nature of these machines by addressing three different qualifications. First, they are lively in the sense of “lifelike.” BCL engineers wanted to apply biological organizational and structural principles to the design of their machines and thus tried to model their artifacts on “natural prototypes.” For example, if they were planning to design “artificial neurons,” their first step would be to look at “nature,” which usually meant catching up with the latest findings in neurophysiology and biochemistry. Second, the term “artifact” references the fact that these machines are still objects made by human engineers. Consequently, the machines’ designs involved a great deal of tinkering and even, often enough, a hint of trickery.

(Note that there is a certain disparity between this second artifactual element and the aforementioned idea of seamlessly transferring natural solutions to electrical engineering. BCL engineers frequently struggled to balance bottom-up and top-down approaches, and reflected on their own involvement in the process of creating “biological” or “bionic” machines (Mueggenburg 2011).)

However, it is the third qualification that makes lively artifacts the best term for these machines. Lively, as a synonym for “spirited” and “active,” implies that these machines appear to exhibit a life of their own. Whereas the first two qualifications focus on the design and experimentation processes of cybernetic machines, liveliness speaks to the performativity and aesthetics of these machines. A machine’s liveliness describes the nature of the interaction between the finished artifact and an individual, an individual who can be, but does not necessarily need to be, the machine’s designer. When analyzing the liveliness of cybernetic artifacts, we have to deal with the trinity of the object, the creator of the object, and the subject interacting with the object.

To illustrate further how the term lively artifacts aptly describes these cybernetic machines – specifically the third qualification of liveliness – I offer two examples: the Ashby Box and the Grandfather Clock. Both of these machines are described by von Foerster in an unpublished interview with his student Paul Schroeder (Foerster/Schroeder/Galuszka 1997). Both machines were designed by Ross Ashby, a neuropsychiatrist and, when it came to designing new machine concepts that integrated systems theory and cybernetic machine design, a key figure at the BCL. The first machine was built as a didactic model for Ashby’s seminars on cybernetics. The Ashby Box, as this device was called by BCL students, was a small metal box with two switches and two little lamps. These inputs and outputs gave the box four variables with two states each, yielding sixteen different combinations of lever positions and lamps turned on or off.

Ashby asked his students to determine the input/output relation of the device. In other words, which position of the levers turned the lamps on or off? The homework assignment proved impossible to solve. After every operation of the switches, the Ashby Box changed its transfer function according to a predefined set of rules unknown to the students. Unless the students opened the box and studied its inner structure, the device was, at least in principle, not analyzable. Seen from the perspective of the students, the Ashby Box resisted its operation. In truth, the box resisted its operation only as long as the students continued to expect it to function like a conventional, albeit complicated, switch. In this sense, the Ashby Box could behave in unexpected and surprising ways, since it only rarely, and then only by chance, carried out the instructions its users gave it, such as “Turn on the right lamp when I put lever one in its on-position and lever two in its off-position.”
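The box’s trick can be illustrated with a small simulation. The sketch below is my own toy reconstruction, not Ashby’s actual circuit: the class name, the hidden four-state register, and the update rule are all assumptions, chosen only to reproduce the experience of a transfer function that silently changes after every operation.

```python
class AshbyBox:
    """Toy reconstruction (not Ashby's circuit): two switches in,
    two lamps out, but a hidden internal state re-selects the
    transfer function after every single operation."""

    def __init__(self):
        self._state = 0  # hidden internal state, invisible to the operator

    def operate(self, switch_1, switch_2):
        # The lamps depend on the switches AND on the hidden state ...
        lamp_1 = switch_1 ^ (self._state & 1)
        lamp_2 = switch_2 ^ ((self._state >> 1) & 1)
        # ... and every operation silently advances the hidden state, so
        # repeating the same switch positions need not repeat the lamps.
        self._state = (self._state * 3 + switch_1 + 2 * switch_2 + 1) % 4
        return lamp_1, lamp_2

box = AshbyBox()
print(box.operate(1, 0))  # -> (1, 0)
print(box.operate(1, 0))  # same inputs, different lamps: (1, 1)
```

An operator who treats the device as a fixed switch will never pin down “the” input/output relation, which is precisely the lesson Ashby set his students.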

However, Ashby did not build his self-organizing machines simply to fluster and confound his students. While Ashby’s second machine, the Grandfather Clock, accomplished this as well, it gave Ashby greater interaction with his creation. This cybernetic machine consisted of two racks with five rows of five lamps each. The front of each lamp was fitted with a small translucent disc containing four differently colored sectors. Each disc was, in turn, equipped with a small motor allowing the disc to rotate in 90-degree increments. As a result, the lamp would glow in a different color after each rotation. In a fashion somewhat similar to the Ashby Box, each rotation of a disc depended on a simple but incomprehensible function involving its current position in relation to the positions of the 49 other discs.
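The flavor of the clock’s self-organization can likewise be sketched in a few lines. Again, this is a hypothetical reconstruction: the coupling rule and the color assignment are my own inventions, not Ashby’s wiring, meant only to show how a simple, fully deterministic rule over fifty four-state discs can yield shifting patterns that look complex from the outside.

```python
N = 50  # two racks of five-by-five lamps
COLORS = ["red", "green", "blue", "yellow"]  # four sectors per disc (colors assumed)

def step(discs):
    """Rotate every disc by a quarter-turn count derived from the
    positions of the 49 other discs: simple to state, but opaque to
    an observer who sees only the colors."""
    total = sum(discs)
    return [(d + (total - d + i) % 4) % 4 for i, d in enumerate(discs)]

discs = [0] * N            # an arbitrary starting configuration
for _ in range(3):         # watch the pattern evolve step by step
    discs = step(discs)
    print([COLORS[d] for d in discs[:5]])
```

Even starting from a uniform configuration, the coupling immediately differentiates the discs, so the color field keeps changing without any randomness in the rule itself.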

Many years later, Foerster recounted in an interview that Ashby used to sit in front of his Grandfather Clock for hours and hours, attentively watching how the machine would constantly produce new patterns of color. Allegedly, Ashby called the self-organizing machine his “inspirational device” (ibid.). Even though its inner design was relatively simple and well known to Ashby, he was fascinated by the fact that such a “primitive … organization can produce a complex appearance, apparition, manifestation,” and he sought to “understand the primitivity of the manifestation which disguises itself in complexity” (ibid.).

The Ashby Box and the Grandfather Clock are both great examples of the liveliness of these machines. Because they appear to behave autonomously and cannot be controlled by their operators, they obtain a kind of “lifelike” quality. That liveliness, however, was a function of the knowledge of the subject interacting with the artifacts. In the case of the Ashby Box, the operator interprets the behavior of the machine as lively precisely because of his or her limited knowledge about its inner structure. Ashby’s Grandfather Clock takes the liveliness of the Ashby Box a step further, its own liveliness a result of the difficulty the user encounters in comprehending the relation between its simple inner structure and its complex user interface. In each case, Ashby created a sufficiently complicated machine design, one that exploited the limited perspective of the user – including Ashby himself – when interacting with the machine. In this way, even with only one subject and one object involved, we must consider the aforementioned trinity that generates the appearance of liveliness.

Both machines were built to reflect human autonomy and the limits (and limitations) of knowledge. Consequently, their construction and exploration was not an attempt to leave the subject-object paradigm behind by identifying means to interface with the material world on a more equal level. Rather, lively artifacts were built to mirror and affirm human freedom. Gordon Pask, a short-term member of the BCL, put it best: “Broadly, the contention is that man, as a self-organizing system, should live in a man-made environment which is also a self-organizing system and which is in this sense part of him” (Pask 1962).


– Jan Mueggenburg (Leuphana University Lueneburg)


Jean-Pierre Dupuy, The Mechanization of the Mind: On the Origins of Cognitive Science, Princeton 2001.

Heinz von Foerster, Perception of the Future and the Future of Perception, in: Instructional Science 1 (1972) no. 1, pp. 31-43.

Heinz von Foerster; Paul Schroeder; Frank Galuszka, Unpublished Interview 1997.

Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago 1999.

Warren S. McCulloch, “Living Models for Lively Artifacts”, in: David L. Arm (Ed.), Science in the Sixties: The Tenth Anniversary AFOSR Scientific Seminar, Albuquerque: University of New Mexico 1965, pp. 73-83.

Jan Mueggenburg, “Lebende Prototypen und lebhafte Artefakte: Die (Un-)Gewissheiten der Bionik”, in: Ilinx. Berliner Beiträge zur Kulturwissenschaft, vol. 2 (2011), pp. 1-20.

Gordon Pask, “My Prediction for 1984”, in: Roger Bannistor (Ed.), Prospect: The Schweppes Book of the New Generation, London 1962, pp. 207-220.

Andrew Pickering, The Cybernetic Brain, Chicago: University of Chicago Press 2010.


