Machine interpreting (MI), like any emerging technology, presents a range of ethical challenges that require careful consideration and governance (Cath, 2018; Floridi, 2021). Designed to enhance communication and understanding across language barriers, from everyday interactions to high-stakes scenarios, this technology has the potential to significantly impact diverse areas of human life. For this reason, its application must be managed responsibly to ensure ethical integrity.
The ethical challenges associated with AI solutions, including MI, can be broadly categorized into three primary scenarios¹:
- Overuse. This scenario arises when AI systems are deployed without demonstrable necessity, leading to superfluous resource consumption. The economic and environmental costs can be considerable, given the substantial energy and computational requirements of these technologies. As AI is frequently deployed at scale and offered at minimal or no direct cost, the potential for overuse is significant, thereby amplifying both financial inefficiency and ecological impact.
- Misuse. This scenario arises when the technology is employed in situations where it may cause harm. The accuracy and appropriateness of MI systems, for example, vary with several factors, including the language pair, the context of the communication, cultural nuances, technical limitations, and user expectations. Using MI in sensitive or high-stakes environments, such as legal proceedings or medical consultations, without adequate safeguards can lead to misinterpretations with serious consequences. Such misuse is not only unethical but also potentially harmful and should be regulated to prevent adverse outcomes.
- Underuse. This scenario pertains to instances where an AI system, despite its capacity to substantially improve communication and accessibility, remains underutilized. Such underuse is ethically problematic, as it withholds the benefits of diminished language barriers from potential beneficiaries. It also represents an economic inefficiency, given the technology’s potential to provide cost-effective accessibility solutions. The underuse of MI can arise from factors such as limited awareness, technological constraints, or resistance to innovation. Addressing these impediments is essential to realizing the technology’s full positive potential.
The aforementioned categorization, in my view, provides a useful general framework for guiding the responsible adoption of this technology. However, several factors require further definition in practical terms, including which authority is responsible for defining ‘highly sensitive scenarios’ and which metrics establish acceptable translation performance. From a technical and legal perspective, MI systems present critical aspects that necessitate responsible management and regulation. Addressing these aspects in a robust, forward-looking, and unbiased manner is challenging. To illustrate, the following key areas of concern, while not exhaustive, warrant consideration:
- Confidentiality. At the time of writing, MI applications are predominantly cloud-based rather than running on the edge, which exposes them to potential data breaches and the risk of inappropriate data usage. Ensuring confidentiality necessitates robust encryption, secure storage, and stringent access controls. Demonstrating compliance with these measures, for example through certifications such as ISO 27001 and SOC 2, is crucial.
- Data ownership. MI systems process sensitive data, which raises ownership and privacy concerns. Clear policies and compliance with current and future regulations like GDPR are crucial for safeguarding user data.
- Appropriate use. The effectiveness of MI systems is contingent upon the language pair, situational complexity, and cultural nuances. Given the rapid evolution of this technology, guidelines for appropriate usage must adapt correspondingly. Stakeholders require clear directives to prevent misuse in critical contexts. In regulated sectors, such as legal and healthcare settings, certification schemes can help ensure system reliability.
- Liability. Accountability for translation errors requires clarity. Balanced regulations are needed to ensure quality and innovation without eroding trust or stifling development.
- Ethical AI and bias mitigation. MI systems, like any other AI system, must address biases to reduce stereotypes and prevent discrimination.
Addressing these and related challenges necessitates a balanced approach that optimizes benefits while mitigating risks (Floridi et al., 2018). This approach should prioritize end-users, their needs, and their dignity over the interests of other stakeholders, such as interpreters, scholars, and industry representatives. It requires ongoing evaluation of the technology’s impact, continuous refinement of ethical guidelines, and assurance that deployment aligns with societal values and needs. Collaboration among stakeholders, including developers, users, and policymakers, is crucial for establishing standards and regulations that guide the responsible use of MI in high-stakes scenarios without unduly restricting its application in other contexts.
Beyond practical considerations, there are further avenues of reflection regarding the proliferation and impending ubiquity of machine interpreting that, while less immediately actionable, merit attention. MI ambitiously purports to offer unrestricted access to spoken content across linguistic barriers. While this objective is ostensibly commendable and warrants pursuit, it harbors subtle risks that necessitate careful consideration. For example, the interconnectedness of individuals, facilitated by the internet and social media, while fostering increased information exchange and knowledge accessibility, has simultaneously contributed to societal polarization and other negative consequences (Becker et al., 2019). In a similar vein, unrestricted access to information through machine interpreting could yield both advantages and disadvantages.
On the positive side, MI offers the potential for enhanced and autonomous dissemination of information and knowledge. While the exclusive provision of services by professionals has numerous advantages, including the assurance of expertise and the high quality standards that professionals can deliver, only machines can make accessibility available to everyone (Susskind and Susskind, 2017)². Conversely, the ubiquitous availability of spoken language translation risks exacerbating radicalization and ideological polarization. Artificial intelligence fosters the perception that all content can, and should, be rendered accessible across all languages and cultures. However, not all content can be meaningfully translated without adequate contextualization of the cultural, historical, and sociological nuances on which effective translation depends. Certain content is deeply embedded within specific cultures or subcultures, deriving its significance solely from that context. Consequently, translation without cultural mediation or contextualization becomes futile or even counterproductive. In such instances, MI is likely to prove inadequate or perform poorly, potentially amplifying misunderstandings and polarization.
BIBLIOGRAPHY
Becker, J., Porter, E., & Centola, D. 2019. The wisdom of partisan crowds. Proceedings of the National Academy of Sciences, 116(22), 10717–10722.
Cath, C. 2018. Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. 2018. AI4People – an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Susskind, R., & Susskind, D. 2017. The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford: Oxford University Press.
This text is based on a section of my chapter: Fantinuoli, C. “Machine Interpreting”. In Sabine Braun, Elena Davitti and Tomasz Korybski (eds.), Routledge Handbook of Interpreting and Technology. Routledge (2025).
1. See Floridi et al. (2018) for the general theoretical framework used here.
2. Susskind and Susskind (2017, p. 33) note that ‘[m]ost individuals and organizations find it challenging to afford the services of top-tier professionals’, and that the use of AI might extend accessibility where now only a limited number of people can actually avail themselves of these services.