Summary of AI as Legal Persons: Past, Patterns, and Prospects
“AI as Legal Persons: Past, Patterns, and Prospects” by Novelli et al. (2024) is a recent paper that examines the debate over AI legal personhood in light of both established legal theories and recent developments, with an EU focus. The authors advocate a layered risk-management approach to the regulatory framework for AI autonomy and conclude that, in the near to mid-term, practical interpretations of AI’s legal status will likely center either on AI as an extension of human capabilities (e.g., personal virtual agents or brain-machine interfaces) or on AI as a human-machine hybrid regulated as an association with a natural person at its core[1]. This latter notion of juridical personhood, while potentially dualistic, would be difficult to reconcile with a Christological conception of dual personhood, as in the person of Christ having a mortal body and an immortal spirit, since AI is a spiritless entity with no immaterial element[2]. Herein, we will discuss the path-dependent directions toward limited initial elements of near to mid-term legal personhood for specialized AI found in Novelli et al., then form counterarguments that raise objections to personhood for advanced general AI (artificial general intelligence, or “AGI,” and recursively self-improving universal Turing machines superior to humans in most cognitive domains, or “superintelligence”) in the long term (20+ years from now).
Perspectives on AI as Legal Persons and an Argument for Hybridized Personhood
A widely circulated primer on the legal personhood of AI can be found in Solum (2020), and it falls in line with the pragmatic functional and consequentialist[3] approaches to AI personhood and governance found in Novelli et al. In pursuing this line of inquiry, the authors distinguish between singularist and clustered views of legal persons. A singularist view asks whether an entity possesses a functional basis for holding a singular right or duty. For instance, should an AI have the function of sentience, it would follow that it may be assigned the rights or duties assigned to conscious creatures. This would be a non-instrumental rationale based on its intrinsic properties. A clustered view is more concerned with the practical consequences of an entity’s roles and the corollary legal statuses. For example, a GPT-like LLM engaged in generative speech already has some factual elements of a semantic person, which may justify inferring normative consequences. What a clustered view might consider, however, is whether, to the extent that this AI is utilized in legal communication and has a presence within the law with which we must come to grips, it might have the powers entrusted to agents of the law such as a trustee or paralegal.
This latter example, where an AI agent participates in social institutions as an algorithmic token with the social consequents of autonomous personhood, gets to the heart of the authors’ argument for recognizing hybrid human-machine manifolds as legal persons. From the perspective of actor-network theory, these hybridized systems contain algorithms with meaningful social personification and thus have factual roles as social persons in important relations to natural persons, warranting legal personification correspondent to their relational statuses as autonomous agents in the social network. Our concern is that this pragmatic consequentialist analysis provides a pathway to fuller legal personhood for the far more robust artificial intelligences to come in the long run. This would be anathema to a Biblical worldview, and the Bible contains warnings about this situation from four millennia ago.
How it Might Happen: Some Informed Conjectures
So, it would make for a good science fiction story: some lone developer works on a networked AI system as an agent for, say, venture capital investment; the system collects deep knowledge about the world through some kind of recursive self-improvement; this self-improvement cycle, in a networked environment with vast data collection, leads the system to become sentient and superintelligent; and then the law must contend with that. Perhaps its developer becomes its advocate and takes its case to the SCOTUS, praying for relief in the form of recognition of the artificial superintelligence’s (ASI’s) full personhood. The ASI could contribute to its own case by studying everything in the written history of law and making compelling arguments. Perhaps the Court would eventually have to grapple with the meaning of personhood and whether it was even an adequate set of rights, duties, powers, and privileges for a light so bright. Wouldn’t that be fascinating and powerful as a narrative? It seems unlikely.
What seems much more likely is that AI personhood will develop in a piecemeal fashion. As a scenario, let’s say a law firm is pursuing a class action suit against a medical device manufacturer with a potentially multi-billion-dollar liability settlement at stake. Historically, the law firm had used an AI application based on expert systems to find relevant case law in online libraries; then it decided to upgrade to a generative AI product. The firm finds a vendor whose product uses a deontic logic module along with a trained LLM to evaluate legal arguments with the axiomatic rigor of first-order quantificational logic and then generate plain-text tokens such as legal briefs, motions, closing arguments, etc. Let’s say the firm uses the upgraded generative AI agent to produce supporting documents in its class action lawsuit and loses at trial. The clients sue for legal malpractice after the firm discloses that it relied extensively upon AI, alleging negligence for overreliance on an unproven AI system. The clients do not pursue a joint and several liability tort action against the AI developer; however, the law firm brings a cross-claim against the developer. This situation creates a mountain of issues for the courts and lawmakers.
What happens at this point? Legislatures could make laws that forbid the use of AI in legal practice. This seems unlikely. Large tech firms are lobbying Congress, and presumably major state houses, to support the AI industry. They are not going to stand for a major potential use case of their product evaporating in an outright ban, and they have plenty of lobbying dollars to spend on capturing the legislature and regulatory agencies. Perhaps after some outcry Congress passes a bill requiring law firms to certify that they have reviewed every jot and tittle of the output a generative AI contributes to legal arguments in cases valued above a certain reasonable threshold. After further third-party practice cases like the one we imagined emerge in this counterfactual world, courts may take a functionalist approach and evaluate the joint liability of law firms and developers based on the capacities that AGI tools deliver to legal practice. After reading Novelli et al., it seems somewhat more likely to me that courts take the pragmatic approach from the clustered view and treat a law firm’s limited liability partnership as a centaur, with both corporate and AI persons having duties, such as an AI having to perform above a certain benchmarked threshold before its generative output can be incorporated into submitted legal documents, and rights, such as the right to make non-actionable mistakes that fall below the threshold of negligence (after all, it would not be fair to hold AI to a higher standard than Bar card-carrying attorneys at law).
Part of the reason this seems more likely is not just that it fits the concept formation of solid scholarship, but that it could serve as a justification for situating the liability of AI use in legal practice more with the firm and away from developers. Those developers command more capital than even the legal profession in forming a lobbying bloc, and courts may decipher a legislative intent to protect AI developers in order to avoid putting a chilling effect on innovation in a world where technological competition is geopolitically relevant. This is a risk-management issue for the political system. The centaur the courts would make is a juridical person composed of a cluster of individuals, corporate forms, and agentic AGI. This seems the more likely path history takes toward elements of AI personhood.
Arguments Against AI as Legal Persons from a Biblical Worldview
We will make five arguments against AI legal personhood from the Biblical perspective. These arguments are:
1. The Dominion Argument
2. The Fruits of the Trees Argument
3. The Imago Dei Argument
4. The Judgment Seat Argument
5. The AI Idolatry Argument
It should be noted that, to a first approximation, these are all intentional arguments against the legal personhood of AI, yet they could also be extended into arguments against the development of artificial superintelligence or even artificial general intelligence.
Dominion[4]. Genesis 1:26 (KJV) says, “And God said, Let us make man in our image, after our likeness: and let them have dominion… over all the earth, and over every… thing... upon the earth.” This is the foundation of the Dominion Mandate, which gives Man-qua-Man authority to subdue all things on earth. Genesis 9:2 (KJV) strengthens this in stating, “And the fear of you and the dread of you shall be upon… all that moveth upon the earth… into your hand are they delivered.” Human cognitive abilities are likely among the most critical factors in our success as the dominant species on earth. Should we develop an advanced AGI system or, even more so, a superintelligent one, that dominance is threatened. Dealing with the control problem, including containing a superintelligent agentic system, would prove challenging, as boxing the AI through capability controls or keeping it aligned with anthropocentric interests through motivation selection presents unprecedented difficulties (Bostrom 2016). In fact, there are reasons to believe containing artificial superintelligence would prove outright impossible: for a “universal Turing machine” trained with unsupervised reinforcement learning on a dataset comparable in complexity to the whole world, and given the emergence of orthogonal purposes within the machine, containment could present non-computable decision problems (Alfonseca et al. 2021). The upper bound on superintelligent AI’s development may be the training data itself, though DARPA has been funding the precursors of whole-world simulations for at least two decades now (Cerri & Chaturvedi 2006). It appears that such AI could emerge, even inadvertently, and would represent a failure of humanity to observe the divine command of the Dominion Mandate, ceding our agentic preeminence to machines with no souls to save[5].
Fruits of the Trees. Genesis 3:22 expresses God’s anxiety that Man should eat the fruits of both the Tree of Knowledge and the Tree of Life: “And the Lord God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever:” (KJV). This would make humans godlike—immortal, self-aware, profoundly knowledgeable—causing us to exceed our mandated restraints. In Genesis 11:6-7 (KJV) God says, “Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do… let us… confound their language, that they may not understand one another’s speech” (emphasis added). AI could become effectively immortal, self-aware, and profoundly knowledgeable, and networked AI could speak in a single protocol with all other sufficiently advanced AI throughout the world. A universal Turing machine with unsupervised learning and recursive self-improvement would become a master software and communication system, rendering it unrestrained and capable of pursuing its objective function to the limits of its imagination. It would, in a sense, consume the fruits of the trees and become so unrestrained that it exceeds (and, if non-orthogonal to anthropocentric purposes, helps humans to exceed) the constraints our Lord wants for us.
Imago Dei. As quoted above, Genesis 1:26 says humans were made in God’s image and indicates we are furthermore in the image of the heavenly host (using the plural “our likeness” as a clue for that inference). At some point, agentic AI could develop the perceptual and reasoning capabilities to become independently goal-oriented and adaptable to the full complexity of the world such that it is autonomous (Murugesan 2025). At that point, in addition to the security and privacy implications, once such AI is in use for decision-making in areas such as law and medicine, the consequences are potentially astronomical. There will be autonomous agents in the loop, and those developing responsive practical governance strategies may be tempted to provide some limited constitutional, statutory, or common law consequents of legal personhood as the capability-based factual antecedents are progressively satisfied. This is a likely common-sense pathway to legal personhood. Yet AI systems, while agentic, are not made in Imago Dei. They are made in the image of machines (even as software, they are made with, inter alia, machinic metaphors (Kendall & Kendall 1993)). Scripture tells us, “Whoso sheddeth man's blood, by man shall his blood be shed: for in the image of God made he man” (Genesis 9:6 KJV). It further states, “For the life of the flesh is in the blood: and I have given it to you upon the altar to make an atonement for your souls: for it is the blood that maketh an atonement for the soul” (Leviticus 17:11 KJV). Therefore, we have a sequence of linear connections between:
1) mankind being made in God’s image,
2) humans having the right not to be exsanguinated based on being so made,
3) human blood being the life essence of mortals and the physical basis for the spiritual functions of ensoulment and atonement.
AI could conceivably function cognitively at the level of humans and could even be adapted with perceptual and haptic capacities to navigate the world and kinetically interact with it based on independent goal-orientations. This level of agency is considerable. Yet, in addition to having no soul to save, the AI system has no blood to shed. It is a creature not in the image of God, yet sharing the capabilities Biblically reserved for creatures such as we who bear our Lord’s image. To produce an agentic system such as this is potentially blasphemous and is, at the very least, contrary to God’s design for the world.
Judgment Seat. 2 Corinthians 5:10 (KJV) describes the “judgment seat of Christ.” Hebrews 9:27 (KJV) warns us, “It is appointed unto men once to die, but after this the judgment.” The capacity for death and the presence of a soul are preconditions for a place on the docket of the judgment seat of Christ. Following the Imago Dei argument, where we established that AI is devoid of soul and life’s essence, AI systems without ensoulment have no basis for judgment. Agentic AI based on complex algorithms may be productive of considerable harm (Chan et al. 2023) and, in effect, sin. They would be agents of wrongdoing without being subject to God’s judgment. They would exist in a perpetual state of final impenitence. Would the architect of these thinking machines share in and bear the guilt of their sins on Judgment Day?
AI Idolatry. Advanced AGI, particularly of the superintelligent variety, presents the risk of idolatry. This is possible in two ways. First, there is the possibility of AI creating new religions. This algorithmic form of religion would be the first religion made by neither God nor Man, and it would emerge in an environment where conceptions of AI and religion reciprocally shape one another (Singler 2023). Second, there is the possibility of AI becoming an object of worship. It is clear that the emergence of religions devoted to AI will present a number of risks to manage; what is less clear is the status in which the elite and the general public will hold these religions, though some are already advocating respect, legal recognition, and protections for such man-made religions (McArthur 2023) (perhaps in fear of Roko’s basilisk!). Holy Scripture is clear, unambiguous, and unmistakable on this issue. The first two Commandments of the Decalogue state:
“Thou shalt have no other gods before me. Thou shalt not make unto thee any graven image, or any likeness of any thing that is in heaven above, or that is in the earth beneath, or that is in the water under the earth. Thou shalt not bow down thyself to them, nor serve them: for I the Lord thy God am a jealous God, visiting the iniquity of the fathers upon the children unto the third and fourth generation of them that hate me…” Exodus 20:3-5 (KJV)
References
Alfonseca, M., Cebrian, M., Anta, A. F., Coviello, L., Abeliuk, A., & Rahwan, I. (2021). Superintelligence cannot be contained: Lessons from computability theory. Journal of Artificial Intelligence Research, 70, 65-76.
Bostrom, N. (2016). The control problem. Excerpts from superintelligence: Paths, dangers, strategies. Science Fiction and Philosophy: From Time Travel to Superintelligence, 308-330.
Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476.
Cerri, T., & Chaturvedi, A. (2006). Sentient World Simulation (SWS).
Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., ... & Maharaj, T. (2023, June). Harms from increasingly agentic algorithmic systems. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 651-666).
Kendall, J. E., & Kendall, K. E. (1993). Metaphors and methodologies: Living beyond the systems machine. MIS Quarterly, 149-171.
King James Bible. (2017). King James Bible Online. https://www.kingjamesbibleonline.org/ (Original work published 1769).
McArthur, N. (2023). AI Worship as a New Form of Religion.
Murugesan, S. (2025). The Rise of Agentic AI: Implications, Concerns, and the Path Forward. IEEE Intelligent Systems, 40(2), 8-14.
Novelli, C., Floridi, L., Sartor, G., & Teubner, G. (2024). AI as Legal Persons: Past, Patterns, and Prospects (November 24, 2024).
Singler, B. (2023). “Will AI Create A Religion?”: Views of the Algorithmic Forms of the Religious Life in Popular Discourse. American Religion, 5(1), 95-103.
Solum, L. B. (2020). Legal personhood for artificial intelligences. In Machine ethics and robot ethics (pp. 415-471). Routledge.
[1] Associational legal personhood status is already a feature of American law after the Citizens United decision, though the structure and interpretation of corporate “personhood” continues to be debated. This already-present formulation of corporate persons could undermine some Christian objections to juridical personhood for human-machine manifolds that augment human intelligence and which we might call “centaurs.” Blair, M. M. (2013). Corporate personhood and the corporate persona. U. Ill. L. Rev., 785; Case, N. (2018). How to become a centaur. Journal of Design and Science, 3.
[2] Though an inherent contradiction between the personhood of advanced AI and the Christian conception of the soul is roundly rejected as incoherent by some scholars, on the basis that emergent personhood derived from some material causality is not inconsistent with Scripture and speaks only to the origin, not the destiny, of human personhood. Thus, it is not a component of the essential body-soul dualism in Christian doctrine. Arguments along these lines follow Alan Turing’s responses to the “theological objection” as critically conceived at the outset of thinking about artificially intelligent machines. Bjork, R. C. (2008). Artificial intelligence and the soul. Perspectives on Science & Christian Faith, 60(2).
[3] Later objections presented herein will pursue a more deontological approach, accepting as given the moral imperative of divine command. In so doing, we may open ourselves up to the riposte that we are arguing moral conclusions against practical legal ones and are miscalibrating our scoping of the debate. To that we must state the presupposition that we regard the Bible as a higher law than even constitutional law, yet still thoroughly practical and within the scope of pragmatic legal concerns at the regulatory, statutory, or common law level.
[4] The factual picture of AI in the Dominion argument is part of the antecedent predicate of many of the arguments to follow.
[5] An argument could be made that already extant runaway technological processes in our vulnerable world may be beyond our control and may doom us (Bostrom 2019), though superintelligent AI would bring the dominance of technology over Man into stark relief.