Eran Kahana, Esq., of CodeX: The Stanford Center for Computers and Law, and DataCard Corporation, has posted several resources related to his project entitled Autonomous Intelligent Cyber Entity (AiCE):
Here is the abstract of the project, according to the CodeX Projects page:
This project explores the commercial and legal aspects/implications of an intelligent cyberagent and its evolution into an autonomous intelligent cyber entity (AiCE, pronounced “ice”). It evaluates and builds functional and operational schemas for standardizing AiCE with an emphasis on reducing waste of judicial resources, increasing e-commerce transactional certainty and expanding into new frontiers for e-commerce interactivity on B2B, B2C and C2C levels.
Mr. Kahana has posted a position paper entitled Application of an Autonomous Intelligent Cyber Entity as a Veiled Identity Agent. Here is the abstract:
This position paper is based on the CodeX “Autonomous Intelligent Cyber Entity” (AiCE) book project. The AiCE draws on its intelligent and computational law capabilities to promote a safer, more efficient on-line environment. It is a flexible framework that can be configured to serve various purposes in business and individual-user settings. Due to its autonomous, intelligent and broad range functional capabilities, the success of AiCE is dependent on its entity status being formally recognized by law. This paper describes how this status can be granted by building on the same legal principles that endow U.S. corporations with an “entity” status; and while the focus here is purposefully narrowed to U.S. law, the same principles have universal application (a subject dealt with comprehensively in the book). Where the particular intention is to better protect a user’s private data, the AiCE can be configured by a user into an “AiCE Veiled Identity Agent” (AVIA). This AiCE configuration shields the user’s private information and offers him a “veiled” identity similar to that which corporate shareholders enjoy, all without degrading the flow of information vital to innovation and new value generation. This paper concludes with the introduction of the Uniform AiCE Transactions Act (UATA), an intelligent legal framework designed to govern all AiCE activity, promote trust and widespread adoption of this model.
Video and audio are available for Mr. Kahana’s 20 January 2010 presentation at Stanford Law School entitled Modeling and Implementing the Autonomous Intelligent Cyber Entity. Here is the abstract:
The Autonomous Intelligent Cyber Entity (AiCE) CodeX project models and examines the broad spectrum of legal and commercial issues surrounding the implementation of hyper-intelligent software code that has independent decision-making and computational law capabilities. AiCE’s functional scope is wide-ranging, limited only by what humans will be comfortable with allowing it to perform. It is able to, for instance, evaluate, negotiate, execute and monitor performance of (online) contracts. In that capacity, it can behave as a human party is expected to (including acting as an agent) yet it is unhampered by variables that ail humans, such as those that lead to complications and uncertainty relative to consent and effective post-execution performance monitoring. Using AiCE also raises numerous questions and challenges. A sampling of these will be discussed, and a conceptual preview of a novel legal framework called the “Uniform AiCE Commercial Transactions Act” (UATA) will be offered as a vehicle by which to resolve them.
Mr. Kahana yesterday published a new blog post on this topic, entitled Germinating Seeds of Agency, at The CodeX Blog. Here is an excerpt:
It is inevitable and only a matter of (a short) time that in this reimagined reality, the use of some avatars will transcend the aquarium-like current environment in which virtual worlds exist. These new and improved avatars will take on the form of AiCE. They will get smarter in this evolutionary window; garner rights; be capable of autonomous operation in certain (initially) limited respects; be exponentially able to quickly learn and adapt to their new environment. In parallel, some users will decide to enable their AiCE to conduct business on their behalf (not just buy stuff on Amazon or eBay), use real-world money, shield their real-world identity (in the form of an AiCE Veiled Identity Agent) and bring about real-world consequences, both positive and negative in ways that are no longer confined to the virtual world aquarium. In this reimagined reality, the need for AiCE to be an Agent will no longer contain the should-factor; thus the seeds of the electronic Agent begin to germinate.
In May 2010 Mr. Kahana published a post entitled Artificial Intelligence Dynamic Rights at The CodeX Blog. Here is an excerpt:
In contrast to HCT [i.e., Human Centric Tasks "that are inherently (and currently) the domain of natural persons"], ENT [i.e., Entity Neutral Tasks, "that can be, from a normative perspective, comfortably performed by non-humans"] configurations do not involve taking decisions that exert moral choice. These are, therefore, dispositive of non-malleable inquiries into liability, punishment, deterrence, and so forth. We can thus see that the HCT/ENT distinction serves to delineate as to the circumstances in which AiCE rights analysis is relevant. It also answers the question posed above (if only partially so) by stating that: (1) Not all AiCE/AI configurations will receive any rights; (2) only AiCE/AI that require rights for performance of their mission will receive them and (3) which rights AiCE/AI receive will depend on the type of task they are configured to carry out.