Dr. Gabriel Hallevy of the Ono Academic College Faculty of Law has posted The Criminal Liability of Artificial Intelligence Entities (2010). Here is a summary:
This article attempts to work out a legal solution to the problem of the criminal liability of AI entities. At the outset, a definition of an AI entity will be presented. Based on that definition, this article will then propose three models of AI entity criminal liability: the perpetration-by-another liability model, the natural-probable-consequence liability model, and the direct liability model.
These three models might be applied separately, but in many situations a coordinated combination of them (all or some) is required to complete the legal structure of criminal liability. Once the possibility of legally imposing criminal liability on AI entities has been examined, the question of punishment must be addressed. How can an AI entity serve a sentence of imprisonment? How can the death penalty be imposed on an AI entity? How can probation, a pecuniary fine, etc. be imposed on an AI entity? Consequently, it is necessary to formulate viable forms of punishment in order to impose criminal liability practically on AI entities. [footnotes omitted]
Tags: Criminal liability of intelligent agents, Criminal liability of robots, Criminal liability of software agents, Gabriel Hallevy, Legal liability of intelligent agents, Legal liability of robots, Legal liability of software agents, Robotics and law, Robots and law