Lesson 1: "Many tasks that humans solve using intelligence can be solved by machines without requiring human-like intelligence."

In my opinion, one of the most profound insights on AI was articulated years ago by philosopher Luciano Floridi. He asserted, in a strikingly simple way, that while humans may be special in many respects, the tasks they perform are not inherently unique. Stripping an agent of what we broadly define as human intelligence does not necessarily prevent it from achieving the same goals—often in a more efficient manner than humans.
This perspective resonates with me for its practical implications. We do not need to replicate animal or human mechanical or cognitive skills to accomplish the tasks that require those skills in animals or humans. Think about airplanes.

Leonardo da Vinci’s early misconceptions about flight were rooted in the belief that human innovation had to imitate nature’s mechanics: he assumed a flying machine would need flapping wings. In contrast, modern aviation succeeded by studying aerodynamics rather than imitating birds. While da Vinci was undoubtedly a genius, we know today that this assumption was wrong. Many people make the same error when thinking about AI. To illustrate this concretely: producing translations at a quality level comparable to—or even surpassing—that of humans does not require, at least in principle, replicating the cognitive skills of a translator. We can achieve this goal by other means. This is what AI is doing. When exactly this will be achieved is open to debate; it took humans centuries to learn to fly. It is a hard pill to swallow, I know.
Lesson 2: "People appreciate technological progress until it conflicts with their personal interests."

There is no doubt that people, generally speaking, appreciate technological advancements. Few would choose to live in an era devoid of the conveniences we take for granted today. Throughout history, progress has consistently improved our lives, often to the point where these advancements become inseparable from our daily routines. We take comfort in knowing that a washing machine handles our laundry, that we can video call loved ones across the globe, and that medical devices and treatments reduce mortality while enhancing our quality of life. Technological progress has left its mark on nearly every aspect of our existence. Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress argues that despite the challenges associated with progress—such as the climate crisis—we are generally better off now than in the past.
The same applies to AI. Generally speaking, as with any technological progress, the benefits of AI will, in the longer term, put humanity in a better position than we are in now (listen to the Nobel Prize lecture: Demis Hassabis, Nobel Prize in Chemistry 2024). This is true despite the many challenges and risks we must face and overcome. What is happening, however, is that most people can accept and applaud progress, celebrating AI as a general concept, until it begins to affect them on a personal or professional level.
For example, interpreters may initially view speech recognition technology as a useful tool, developing a positive attitude toward its advancement and adoption. However, this attitude may shift dramatically when personal interests—specifically, their professional livelihood—are at stake. Speech translation, a technology similar to speech recognition and aimed at achieving the same kind of accessibility, may then be perceived as a threat to be opposed.
This reaction is not unusual; in fact, it is deeply rooted in human behavior. We instinctively embrace what benefits us and resist what threatens us.
What’s particularly intriguing is how this ambivalence makes our moral compass irrational. For example, many criticize artificial intelligence as harmful or unethical because it disrupts their jobs, yet applaud AI-powered tools that let them file their tax declarations without an accountant. Or, representatives of professional associations, in danger of obsolescence because of AI, might argue against AI disruption on the grounds that AI is environmentally unsustainable—a valid concern—but are happy to travel internationally by plane to attend flagship conferences organized by those same associations. This ambivalence in morality is a clear example of selective judgment, a cognitive bias that skews our perception of right and wrong. It is one of the most human traits: thinking driven not by rational judgment, but by personal interests and preferences.
Lesson 3: "Because of the hype, many people seem to believe that technological progress, particularly AI, is the go-to solution for any problem, even for those that cannot inherently be solved by it."
If you navigate the corporate world daily or pay close attention to how companies market their products, you may have noticed an interesting phenomenon: AI appears to be everywhere. Conversations often sound like this: “Let’s solve this issue with AI,” or “Let’s add AI to this product,” or even “We need to incorporate AI somewhere—let’s find the right place.”
The motivations behind this trend are varied. Some companies want to appear innovative—since innovation is still associated with being cutting-edge and “cool” (this might change anytime soon, as this study reveals, something I tend to agree with). Others aim to be perceived as forward-thinking, adding perceived value to their offerings. Some may genuinely believe that AI is the solution they need, for example to cut costs and remain competitive.
What is interesting is that while AI can indeed provide effective solutions to many challenges, it is surely not a panacea. Often, problems are tackled with complex AI systems when a simple decision tree would suffice—offering greater reliability, security, speed, and simplicity. In some instances, the situation becomes even more paradoxical, with AI expected to solve problems that AI simply cannot address.
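To make the contrast concrete, here is a minimal sketch of a hand-written decision tree for a hypothetical problem (routing customer support tickets); the function name, rules, and queue labels are all illustrative assumptions, not taken from any real system. A handful of explicit rules can solve such a problem with no machine learning at all, and the result is fast, auditable, and deterministic:

```python
# A hypothetical example: routing support tickets with a hand-written
# decision tree instead of an AI model. Every branch is an explicit,
# inspectable rule.

def route_ticket(subject: str, is_paying_customer: bool) -> str:
    """Return the name of the queue a support ticket should go to."""
    subject = subject.lower()
    # Billing-related keywords take precedence over everything else.
    if "refund" in subject or "invoice" in subject:
        return "billing"
    # Access problems go to the security team.
    if "password" in subject or "login" in subject:
        return "account-security"
    # Paying customers with unclassified issues get priority handling.
    if is_paying_customer:
        return "priority-support"
    return "general"

print(route_ticket("Cannot login to my account", False))  # account-security
print(route_ticket("Refund for double invoice", True))    # billing
```

When requirements are this well defined, the rule-based version beats an AI model on every axis that matters here: it never hallucinates, it runs in microseconds, and anyone can read the code and predict its behavior.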
A prime example of this paradox is the phenomenon of automated fact-checking, a focus of both practical implementations (such as those used by social media platforms) and extensive research within the natural language processing community. While certain aspects of fact-checking can be addressed through algorithms, the fundamental challenge lies in defining truth itself (see, for example, the article “Understanding the Promise and Limits of Automated Fact-Checking”). AI can handle simple cases, but the complexity grows exponentially when we consider the nuances of real-world events and the diverse perspectives that shape them. In practice, many fact-checking systems risk becoming tools of censorship rather than arbiters of truth. Why? Because the underlying issue is that we cannot expect any agent—human or machine—to determine absolute truth in a world that is rarely black and white. A classic example of an unsolvable problem.
Lesson 4: "While language is an extraordinary tool that has arisen in humans over the course of evolution, there is nothing inherently magical about our language skills."

In his great book “The Language Instinct”, Pinker magnificently describes the ‘magic’ (my word) of our human language instinct. Because language is something truly fantastic that we master so naturally, we believe that language, like other human skills, is inherently magical, exclusive to humans, and irreproducible. While it is true that language is a human trait that sets us apart from other species (no animal has developed it, or is able to learn it), many tend to conclude that machines, too, will never be able to master this ability. Language is an instinct: you either have it or you don’t.
They often forget that, throughout history, humans have created things far superior to themselves, using the gifts nature has bestowed upon us. For example, we invented automobiles that move much faster than we can run. Or consider chess: humans invented the game but also developed machines that can play it better than we ever could. The same principle, and this is my point, applies to language. Now that we can interact with large language models, an invention of the human mind, this concept should have become self-evident.
While some may find it difficult to accept, language is not imbued with mystical properties. Despite its complexity—with layers such as pragmatics and emotional nuance—language remains a rational phenomenon governed by material processes. As a result, language-based activities, such as communication, translation, and interpretation, can be replicated with increasing accuracy and sophistication. As mentioned earlier, this does not require machines to replicate the exact mechanisms of human cognition but rather to use different approaches to achieve comparable results.
Lesson 5: "Professionals believe AI will never be able to perform their job because they either overestimate the complexity of their own work or underestimate the capabilities of AI."
When I engage with professionals, they often argue that AI cannot replicate their work at a comparable level of quality due to the inherent complexity of the tasks and skills involved. They assert that real-time translation, for instance, entails more than simply converting words from one language to another; it requires understanding the situation, culture, context, and nuances. Because these elements are so multidimensional and difficult to define in precise, logical terms, they conclude that machines will never be capable of performing such tasks as they do. There is a fallacy in how this opinion arises (it has some similarities with the famous Dunning-Kruger effect).

While their assessment of the complexity of their profession is valid—they are, after all, domain experts with a deep understanding of how intricate their work truly is—their grasp of AI science tends to be limited. They are not AI scientists or researchers, after all. This creates a skewed perspective: they possess profound knowledge about the demands of their work but only a simplified insight into the science that could one day replicate these tasks. In other words, the fallacy leading them to assert that a machine will never be able to perform their job stems from this imbalance in knowledge.
Lesson 6: "Bringing technology into real-world applications is just as challenging as developing the underlying technology."
Technological products and services available to the general public typically consist —at an abstract level— of two key components: the underlying technology and the application through which users interact with it. For example, take ChatGPT: the underlying technology is a Large Language Model—an advanced system that enables us to retrieve information, ask questions, translate texts, and more. The application, on the other hand, encompasses the user interface (UI) and user experience (UX); elements that leverage this underlying technology to help users achieve specific goals. In this case, it includes features like account management, the way users interact with the model via the interface, and the overall user journey. These two elements are distinct but equally important.
While the development of new technologies often takes center stage—rightly so, given the complexity, expense, and innovative force required to create groundbreaking advancements—the creation of high-quality applications that translate these technologies into accessible, usable products is just as challenging. Consider machine translation: building a model that delivers high-quality translations from one language to another is undeniably complex. However, ensuring that the application meets the user’s needs is an entirely different challenge. It requires a deep understanding of user requirements, mastery of the underlying technology, innovative power to imagine and realize features on top of the technology, and a relentless focus on refining the user experience, iteration after iteration, based on feedback and testing.
Lesson 7: "The morality is in the eyes of the beholder: how the same technology can augment one person and replace another."

Many people claim that AI is simply a tool designed to augment humans, not replace them. Others say the opposite: AI is a tool to replace humans, not augment them. The interesting point here is realizing how much this depends on the perspective of the observer. What one person sees as support, another might perceive as a replacement.
Take machine translation (MT) as an example: for a lawyer who needs to access foreign documents, MT acts as a tool that enhances their productivity and independence. From the lawyer’s perspective, this AI tool is a form of augmentation—something that supports and empowers them to complete their work more efficiently.
On the other hand, for a translator, the situation looks very different. When a lawyer uses MT in place of a translator’s services, the same tool is viewed as a replacement, not a support. While the lawyer benefits from the AI tool, the translator may see and experience it as a threat to their livelihood (it is less relevant here that MT can also be seen by the translator as a tool that augments their work).
Lesson 8: "Professionals generally agree that AI can automate many jobs—just not their own."

I encountered this idea for the first time in the book by Susskind and Susskind, “The Future of the Professions”, almost ten years ago. Professionals tend to agree that AI can automate many tasks, just not the ones that make up their own profession. It is probably not that difficult to understand why: it is natural for people to grasp the complexity of their own work and, as a result, to assume that many other jobs are less complex and therefore more suitable for automation. This attitude is prevalent across almost any profession. If you read forums where professionals from different fields discuss AI and automation, you’ll often encounter a recurring theme: “That job can be automated, but not ours. Everyone affirming the opposite has no idea what we do.”
For example, an accountant might acknowledge that much of the work involved in translation can be handled by machines, and they will see this as a positive technological development, helping them deal effectively with foreign clients. Yet they’ll assert that their own job cannot be automated. Too complex. Similarly, ask a translator, and they’ll agree that tasks like accounting are ripe for automation and would welcome such technological advancements, but they’ll insist that the quality of their translation work could never be replicated by a machine. Too complex. I’m not sure if this phenomenon has a specific name, but I think it should.
Lesson 9: "Following Copernicus, Darwin, and Freud, it is now Turing's turn (or any other AI pioneer) to remind us that we’re not as exceptional as we like to believe."
Following in the footsteps of Copernicus, Darwin, and Freud, it is now the turn of Alan Turing (or any other AI pioneer) to remind us that we are not as exceptional as we often like to believe. Throughout history, our understanding of ourselves has been continuously challenged by groundbreaking ideas that have shifted the human-centered view of the world.

Copernicus upended the belief that Earth was the center of the universe, showing us that we are not the focal point of the cosmos. Darwin followed with the theory of evolution, revealing that humans are not uniquely separate from other species, but are instead part of a broader, interconnected web of life. Freud, too, shattered our illusions of self-importance by proposing that our conscious thoughts are only a small part of a much larger, often unconscious, psyche.
Now, with the rise of artificial intelligence, we face a new challenge to our sense of uniqueness. For centuries, we have believed that certain skills, from complex reasoning to creativity, are inherently human traits—skills that could never be replicated by machines. After all, what could be more uniquely human than our ability to understand language, solve intricate problems, or express emotions through art? But as AI continues to evolve, it becomes clear that these abilities are not as exclusive to humans as we once thought. Machines can now translate languages, generate music, and even perform complex calculations—tasks we once considered beyond the reach of technology.
This belief in our exceptionalism has been proven wrong time and time again. The more we learn about the natural world, the more we realize that we are part of a larger, interconnected system. And the more we advance technology, the more we see that many of the skills we once thought were uniquely human can be replicated, and sometimes even surpassed, by machines.
AI may not be able to replicate the full depth of human experience or consciousness just yet, but it is already challenging the notion that our intelligence, creativity, and skills are so fundamentally different from those of machines. It reminds us that, like Copernicus, Darwin, and Freud, Turing and his successors are not here to diminish our significance, but to humble us, reminding us that our place in the grand scheme of things is far less exceptional than we once imagined.
Lesson 10: "AI is yet another transformative technology, like many before it. Some will benefit while others will face setbacks."
AI represents yet another transformative innovation in a long history of technological breakthroughs that have reshaped the way we live, work, and interact with the world. Much like the inventions of the printing press, the telephone, or the internet, AI holds the potential to revolutionize industries, enhance productivity, and improve the lives of billions (see for example this paper). However, as with any significant leap forward, this progress comes with a mix of opportunities and challenges—some will benefit greatly from the innovation, while others may face setbacks (see this Forbes article). For example, my mother lost her job as a typist in her 30s due to the advent of personal computers, which enabled people to handle typing tasks themselves instead of outsourcing them to professionals.

One modern example of this duality can be seen in the field of translation. The rise of AI-driven translation tools has made good-quality translations more accessible than ever before. These technologies enable millions of people worldwide to communicate across language barriers with ease, whether for personal, business, or educational purposes. The ability to quickly translate text or speech improves access to information, supports international cooperation, and fosters greater cultural exchange. In this way, AI-driven translation technology is undoubtedly making life easier and more interconnected for billions of people.
However, this convenience comes at a cost, particularly for professionals in the translation industry. As AI translation tools continue to improve, some sectors of the translation market are seeing diminishing demand for human translators, especially in areas where high volumes of translation work are required with less emphasis on nuance or creative interpretation (see this recent survey). For example, machine translation is increasingly used in business communications, websites, and even in legal documents, which has led to fewer opportunities for translators in these fields. While there will continue to be a need for human translators in specialized areas—such as literary translation or situations requiring risk control—many translators now face increased competition from machines that can complete certain tasks faster and at a lower cost.
This is a familiar pattern seen throughout history with other innovations: some sectors and individuals thrive in the face of change, while others may face, unfortunately, obsolescence.