Criminal Liability Arising from the Use of Artificial Intelligence Technologies: A Legal Analysis

Author

Mansoura University

Abstract

This study explores the question of criminal liability in relation to the use of artificial intelligence (AI) technologies, with a particular focus on identifying who may be held accountable—whether a natural person or a legal entity—and the types of penalties that may be imposed in the event of criminal misconduct involving such technologies.
Employing a descriptive and analytical approach, the research examines existing legislative frameworks, drawing on statutory texts, constitutional provisions, judicial rulings, and comparative legal systems. It assesses the extent to which current laws, including Egypt’s Law No. 175 of 2018 on cybercrime, address the evolving challenges posed by AI applications.
The study finds that AI represents a major driver of technological and human progress, offering profound benefits across many sectors in the service of public welfare and safety. Yet, the very capabilities that make AI valuable—its autonomous decision-making and simulation of human behavior—also give rise to legal uncertainties and gaps in accountability. This necessitates the development of clear and adaptable legal rules that reflect the unique nature of AI systems.
One of the study’s central conclusions is that AI-related criminal liability is legally conceivable, though current legislative models vary in how they approach the issue. The research emphasizes the importance of creating a regulatory framework that holds AI developers, owners, users, and third parties accountable, and considers the potential value of assigning a form of hypothetical legal personality to AI systems to facilitate enforcement and ensure justice.
Among the recommendations offered are: amending existing laws to address legislative shortcomings; codifying liability standards for all actors involved in AI technologies; establishing an independent oversight body to regulate AI use; and imposing strict penalties on corporate entities found responsible for misuse. The study further highlights the need for robust scientific research and international collaboration—through treaties and bilateral agreements—to ensure the responsible and ethical deployment of AI technologies in a rapidly changing digital environment.
