It doesn’t happen often, but every now and then, it does. You come across something — an idea, a concept, a phrase — and it hits you like a lightning bolt. Suddenly, everything clicks. It could be a brand-new discovery, the articulation of an intuition you’ve long had, or the moment when a vague notion finally gets a name (and we all know how crucial naming is for conceptualizing ideas, and therefore for reasoning). Sometimes, it’s a mix of all these elements at once.
This exact experience happened to me a few days ago when I read an article published in Nature Human Behaviour titled “The case for human–AI interaction as system 0 thinking” by Massimiliano Chiriatti (a manager at Lenovo) and his team.
This article explores how the way humans think and make decisions is evolving, driven by the growing influence of technological tools — particularly artificial intelligence systems. The authors ask a fundamental question: are these tools becoming an extension of our minds?
The notion that the mind can extend beyond the brain has been a subject of philosophical and scientific debate for decades. According to Andy Clark and David Chalmers’ Extended Mind Thesis, certain external artifacts can become functionally integrated into our cognitive processes, effectively forming a broader cognitive system.
Building on this concept, the article draws on Daniel Kahneman’s model of human thought, which divides the mind into two systems: System 1 (fast, intuitive thinking) and System 2 (slow, rational thinking). The argument is that both Systems 1 and 2 are now increasingly reliant on information provided by data-driven technological tools, such as Google Search, Google Maps, ChatGPT, and other AI-powered assistants. These tools process vast amounts of data and present users with pre-filtered, ready-made information, acting as intermediaries between the external world and our cognitive processes.
As a result, Systems 1 and 2 are no longer solely dependent on information obtained through direct experience or traditional learning. Instead, they are significantly influenced by the outputs of these external, data-driven systems. Take, for example, as the authors explain, this common situation: finding directions home or deciding where to buy a ticket to Rome. Our immediate, instinctive response (System 1) and our more deliberate, analytical reasoning (System 2) now both rely on the information generated by search engines, knowledge databases, or AI-powered chatbots. This technological integration has become so seamless that we may not even be aware of how deeply our thoughts and decisions are shaped by these external systems.
To better understand and define this phenomenon, the authors introduce the concept of “System 0”. System 0 refers to the technological layer that operates before and beneath Systems 1 and 2. It represents the combined capacity of data-driven tools to process, filter, and present information that shapes our thinking and decision-making. Unlike Systems 1 and 2, which are internal to the mind, System 0 operates externally but is no less influential.
Interestingly enough, it is called a “system” because it spans multiple mechanisms (search engines, recommendation systems, AI assistants) and interacts dynamically with users. Its content is not fixed but emerges from these interactions, shaping the inputs that Systems 1 and 2 act upon. System 0, then, is more than just a passive information source: it’s a cognitive partner that filters and pre-selects the knowledge and possibilities we consider. By recognizing the existence of System 0, we gain a new perspective on how modern tools are shaping not only what we know but how we think and how we work.
Computer-assisted translation and interpreting tools, especially when integrated with machine translation and suggestion mechanisms, are a good example of System 0. Think about the prepopulation of translated segments in a CAT tool for translators, or the automatic suggestions of terms and numbers in a CAI tool for interpreters. These tools become the new information layer through which knowledge is processed, effectively acting as a form of external cognition. Depending on how deeply this layer is integrated into the workflow, we can argue that the cognitive activities of both translation and interpreting are significantly altered. What’s particularly fascinating is how this external layer, operating so seamlessly in the background, influences the decisions made during the translation or interpreting process, from selecting words and phrases to determining the structure of a message. There are already good research projects within translation and interpreting studies beginning to explore this, even if they do not use Chiriatti’s terminology.
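To make this concrete, here is a minimal toy sketch, in Python, of what such a “System 0” layer might look like in a CAT workflow: before the translator even sees a segment, a fuzzy translation-memory lookup or a machine-translation fallback has already chosen the draft they will start from. All the names here (TranslationMemory, machine_translate, prepopulate) are illustrative inventions, not the API of any real CAT tool.

```python
# Hypothetical sketch of a "System 0" pre-translation layer in a CAT workflow.
# The tool, not the translator, decides what draft appears in each segment.
from __future__ import annotations

from difflib import SequenceMatcher


class TranslationMemory:
    """Stores source->target pairs and retrieves the closest fuzzy match."""

    def __init__(self, entries: dict[str, str]):
        self.entries = entries

    def best_match(self, segment: str) -> tuple[str | None, float]:
        best, score = None, 0.0
        for source, target in self.entries.items():
            similarity = SequenceMatcher(None, segment, source).ratio()
            if similarity > score:
                best, score = target, similarity
        return best, score


def machine_translate(segment: str) -> str:
    # Stand-in for a real MT engine call (e.g. a web API request).
    return f"<MT draft of: {segment}>"


def prepopulate(segment: str, tm: TranslationMemory, threshold: float = 0.75) -> str:
    # The "System 0" step: the starting draft is selected by the tool,
    # before any human judgement is applied.
    match, score = tm.best_match(segment)
    if match is not None and score >= threshold:
        return match  # high fuzzy match: TM hit is prefilled verbatim
    return machine_translate(segment)  # otherwise fall back to raw MT


tm = TranslationMemory({"Press the start button.": "Drücken Sie die Starttaste."})
print(prepopulate("Press the stop button.", tm))
# -> "Drücken Sie die Starttaste." (a near-match, and subtly wrong)
```

Note what happens in the usage example: the near-match for “Press the stop button.” prefills a translation of “start”, a subtly wrong draft that the human must notice and correct. That, in miniature, is the influence (and the risk) of System 0: the options we deliberate over have already been pre-selected for us.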
Similarly, and to my knowledge completely unexplored, viewing machine interpreting as a form of System 0 (essentially, a new layer of information processing within multilingual communication) offers a fascinating lens through which to examine this technology. How will our cognition change when we are able to understand and be understood in multiple languages, even ones we do not speak? The System 0 perspective might allow us to assess the long-term cognitive effects of using such technology on our minds. I believe this is a compelling avenue for further research, as it raises important questions not only about the evolving nature of expertise in translation and interpreting, but also about the broader implications for communication among the general public. How will these tools shape the way we understand and navigate multilingual interactions in the future? The answers to these questions could significantly alter our approach to both language and communication.
This idea of ‘System 0’ also raises a big sociological question: as AI tools shape our decisions, a gap may grow between those who understand and control them and those who merely rely on them passively. How, then, do we prevent over-dependence and make sure people are educated to use these tools critically and wisely? A ChatGPT-generated self-tutorial, perhaps? ;o)