Liability for Harm Resulting from the Use of Artificial Intelligence Technologies: An Islamic Legal Study

Al-Azhar University

Abstract

This study examines legal liability for harm caused by the use of artificial intelligence (AI) technologies from the perspective of Islamic jurisprudence. It surveys the nature and types of AI and its growing presence across various sectors, and offers a juristic analysis of how Islamic law may classify AI in terms of legal capacity (ahliyyah). The study draws on analogies from classical jurisprudence, such as liability for harm caused by animals or by legally dependent persons, to assess how similar frameworks might apply to intelligent systems.
Through applied examples, particularly in transportation and medicine, the study considers how responsibility should be allocated when AI technologies cause harm. Methodologically, it combines foundational legal reasoning with analytical assessment of contemporary realities.
The research affirms that, under current legal assumptions in Islamic law, AI systems such as robots are non-autonomous tools and therefore lack legal capacity; they are not, in themselves, subjects of liability. The study also notes, however, that there is no juristic obstacle to assigning a limited legal capacity or financial personality to AI in future legal frameworks, should such a move become necessary to address evolving technological realities.
Importantly, the study holds that liability for harm extends to any party directly involved in the design, development, ownership, or use of AI systems, including manufacturers, programmers, and operators, provided that the elements of liability are present and actual damage is established.
Among its recommendations, the study urges international conferences and legal workshops to raise awareness among AI developers and users of the ethical and legal implications of this technology. It further calls for stricter oversight of companies involved in AI production and deployment, to ensure responsible use and to safeguard against risks to public safety and welfare.
