Shannon Vallor and Tillmann Vierkant offer a legitimately good argument that existing discourse on AI ethics has focused too heavily on issues of (epistemic) transparency, bias, and (lack of moral) control. These concerns, while important, may be missing a more fundamental problem: the vulnerability gap between human moral agents and AI systems (a key premise throughout: AI systems cannot be moral agents themselves).
The responsibility gap in AI ethics is the difficulty of assigning moral responsibility for the actions of autonomous systems that operate with minimal human oversight. Vallor and Vierkant argue, however, that the typical framing of this problem around epistemic opacity (our inability to fully understand AI decision-making) and lack of human control is misguided. Neither issue is unique to AI: cognitive science suggests that human decision-making is itself often opaque to the person deciding and far less consciously controlled than we tend to assume.
Instead, Vallor and Vierkant propose that the true responsibility gap stems from an asymmetry of vulnerability between humans and AI systems. Human moral responsibility, they argue, is grounded in our mutual vulnerability - our ability to affect and be affected by each other emotionally and socially through our actions. AI systems, lacking sentience and emotional capacities, cannot participate in this web of vulnerability that underpins human moral relations.
If the vulnerability gap stems in part from the way AI systems fragment and distribute human agency, perhaps we can design systems and organizational structures that better preserve coherent spheres of human moral responsibility. This might involve limiting automation in certain domains, creating clearer chains of accountability, or developing new interfaces that make the human moral stakes of AI-mediated decisions more salient.
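To make the "clearer chains of accountability" idea less abstract, here is a purely hypothetical sketch of my own (not anything Vallor and Vierkant propose): an append-only decision log that simply refuses to record an AI-mediated decision unless a named, answerable human is attached. The types and field names (DecisionRecord, accountable_owner, and so on) are invented for illustration.

```python
# Illustrative only: keep a named, accountable human attached to every
# AI-mediated decision so responsibility does not dissolve into the pipeline.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str          # stable identifier for the AI-mediated decision
    model_version: str        # which system produced the recommendation
    recommendation: str       # what the system suggested
    accountable_owner: str    # the human answerable for acting on it
    rationale: str            # the owner's stated reason for accepting or overriding
    accepted: bool            # whether the human endorsed the recommendation
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_decision(log: list[DecisionRecord], record: DecisionRecord) -> None:
    """Append-only logging keeps the chain of accountability inspectable."""
    if not record.accountable_owner:
        # Refuse to log decisions that have no answerable human attached.
        raise ValueError("every AI-mediated decision needs an accountable owner")
    log.append(record)
```

The point of the sketch is only that responsibility can be made architecturally hard to lose; a logging schema obviously does not solve the underlying moral problem.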
Another important consideration is how to cultivate a sense of moral responsibility in the humans who design, deploy, and oversee AI systems, even if the systems themselves cannot be moral agents. The "agency cultivation" framework Vallor and Vierkant propose could potentially be applied here - developing practices and institutions that make AI developers and operators more acutely aware of, and answerable for, the moral implications of their work.
It's also worth considering whether there are ways to make AI systems more "vulnerable" in a morally relevant sense, even if they can't experience emotions like humans do. Perhaps systems could be designed with clearer feedback mechanisms that make their "reputation" or "trustworthiness" dependent on adhering to ethical principles, creating a kind of functional analogue to human moral vulnerability.
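Again purely as an illustration (the mechanism, the TrustLedger name, and the thresholds are all my own assumptions, not the authors'), one could imagine a trust score that decays whenever the system violates a declared ethical constraint and, below some floor, forces decisions back to a human:

```python
# Illustrative only: a crude "functional vulnerability" mechanism in which a
# system's operating scope shrinks when it violates declared constraints.
class TrustLedger:
    """Tracks a trust score that decays on violations and recovers slowly."""

    def __init__(self, score: float = 1.0, penalty: float = 0.2,
                 recovery: float = 0.01, floor: float = 0.3) -> None:
        self.score = score
        self.penalty = penalty      # cost of each recorded violation
        self.recovery = recovery    # slow regain per clean decision
        self.floor = floor          # below this, autonomy is withdrawn

    def record(self, violated_constraints: list[str]) -> None:
        """Update the score after each decision the system makes."""
        if violated_constraints:
            self.score = max(0.0, self.score - self.penalty * len(violated_constraints))
        else:
            self.score = min(1.0, self.score + self.recovery)

    def autonomous_operation_allowed(self) -> bool:
        # When "reputation" falls below the floor, decisions route to a human.
        return self.score >= self.floor
```

Whether such a mechanism would count as morally relevant vulnerability, rather than just another control knob, is exactly the question the vulnerability-gap framing raises.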
I am left with a few questions that merit further exploration:
How can we design AI systems and human-AI interfaces to better preserve coherent spheres of human moral responsibility?
What new social practices or institutions might help cultivate a sense of moral answerability in the humans behind AI systems?
Are there ways to create functional analogues to moral vulnerability in AI systems, even if they can't experience human-like emotions?
How does the vulnerability gap interact with other ethical concerns around AI, such as fairness, transparency, and privacy?
What are the implications of the vulnerability gap for different domains of AI application (e.g. healthcare, criminal justice, finance)?
How might the vulnerability gap evolve as AI systems become more sophisticated and potentially develop greater capacities for social interaction and apparent emotional intelligence?
A potential criticism of Vallor and Vierkant's argument is that it may be overly anthropocentric, assuming that moral responsibility must be grounded in human-like emotional vulnerabilities. An alternative view might argue that as AI systems become more integral to our social fabric, we may need to expand our conception of moral responsibility to encompass non-human agents in novel ways.
October 2024