Since the early 1960s, Artificial Intelligence has cherished the ambition to design an artificial cognitive machine able to reproduce intimate aspects of human behaviour. Distributed Artificial Intelligence and its most recent avatars—Multi-Agent Systems—have developed the concept towards social interactions and societal dynamics, attracting the attention of sociologists and ethnographers who found new ways to elaborate or validate their theories. But populations of cognitive agents are not the real thing, despite the efforts of their designers. Furthermore, one must cautiously examine the rationale behind these often incredibly complex arrangements of algorithms in order to assess the usefulness of such exercises. As a matter of fact, Artificial Intelligence relies on a very positivist, and sometimes reductionist, view of human behaviour. For centuries, from Bacon to Peirce, philosophy of mind has provided meaningful insights that challenge some of these views. More recently, post-normal approaches have taken an even more dramatic stand—some sort of paradigm shift—where direct knowledge elicitation and processing override the traditional hardwiring of formal logic-based algorithms within computer agents.

Keywords: Agent-Based Modelling, Artificial Intelligence, Icon, Idol, Philosophy of Mind, Cognition.
Perez, P. (2006). Agents, idols, and icons. In P. Perez & D. Batten (Eds.), Complex Science for a Complex World: Exploring Human Ecosystems with Agents (pp. 27–56). Canberra, Australia: ANU E Press.