Humanity in Our Machines
A very quick & naïve response to Shannon Vallor's essay 'The Danger Of Superhuman AI Is Not What You Think' in Noēma IV (May 2024). Forgive my shallowness!
Shannon Vallor's essay offers a critique of the rhetoric surrounding artificial intelligence. While Vallor makes several compelling points, I believe her argument would benefit from a fuller consideration of the interplay between human and artificial intelligence — though, to be fair, she is surely not unaware of it.
Rhetoric of "Superhuman" AI
Vallor rightly critiques the hyperbolic language of "superhuman" AI, arguing that it implicitly devalues human intelligence and agency.
How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? – Shannon Vallor
This rhetoric does indeed risk reducing human cognition and experience to a narrow set of task-completion metrics. However, this reductionist view is neither new nor shocking – it's a new iteration of arguments we've seen before, from behaviorism to today's tech-optimism. The "reality distortion field" that often surrounds technological progress is clearly at play here, oversimplifying complex issues into problems that only superior technology can solve.
Redefining Intelligence
A key issue Vallor identifies is the shifting definition of artificial general intelligence (AGI) from human-like consciousness to economic task performance. This redefinition does indeed risk reducing our conception of intelligence to a set of narrowly defined, economically valuable skills. However, I would argue that this shift reflects not just corporate agendas, but also our evolving understanding of intelligence itself.
[R]esearchers like Geoffrey Hinton and Yoshua Bengio are now telling us a different story. A self-aware machine that is “indistinguishable from the human mind” is no longer the defining ambition for AGI. A machine that matches or outperforms us on a vast array of economically valuable tasks is the latest target. – Shannon Vallor
We must, however, be open to the possibility that artificial intelligence may develop along fundamentally different lines than human intelligence, potentially surpassing us in some domains while remaining limited in others. This evolutionary divergence is not necessarily problematic: technological systems need not mimic human cognition to be valuable, or even superior in particular domains.
The Nature of Consciousness and Experience
Vallor emphasizes the lack of consciousness and sentience in current AI systems, arguing that this fundamental limitation makes comparisons to human intelligence misguided. While this is a crucial point, we should be cautious about assuming that consciousness and sentience are necessary prerequisites for all forms of intelligence or capability.
Once you accept that devastating reduction of the scope of our humanity, the production of an equivalently versatile task-machine with “superhuman” task performance doesn’t seem so far-fetched; the notion is almost mundane. – Shannon Vallor
As we continue to debate the nature of consciousness, we must remain open to the possibility that artificial systems could develop forms of intelligence or problem-solving capabilities that do not require consciousness as we understand it. This does not negate the unique value of human consciousness and experience, but it does suggest that we should be careful about using these qualities as the sole benchmark for evaluating AI capabilities.
The Alignment Problem
While Vallor focuses primarily on the rhetorical and ideological dangers of "superhuman" AI, it's important to also consider the very real technical challenges of ensuring that advanced AI systems remain aligned with human values and goals. The current approach of optimizing AI systems for fixed objectives can lead to unintended and potentially catastrophic consequences.
A more nuanced approach would involve developing AI systems that are inherently uncertain about human preferences and values, leading to more cautious and beneficial behavior. This aligns with Vallor's call for a more human-centric approach to AI development while acknowledging the complexity and diversity of human values.
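The contrast can be made concrete with a toy sketch. This is not from Vallor's essay, and the names, numbers, and "defer" rule below are invented for illustration: one agent blindly maximizes a fixed proxy objective, while another averages over several hypotheses about what humans actually want and falls back on a safer choice when those hypotheses disagree.

```python
# Toy illustration (assumptions, not any real system): a fixed-objective
# optimizer versus an agent that is uncertain about human preferences.

def fixed_objective_agent(actions, proxy_reward):
    """Pick the action with the highest proxy reward, no matter what."""
    return max(actions, key=proxy_reward)

def uncertain_agent(actions, reward_hypotheses):
    """Average reward over hypotheses about the true human preference."""
    def expected(a):
        return sum(r(a) for r in reward_hypotheses) / len(reward_hypotheses)
    # Maximizing the *average* over hypotheses penalizes actions that any
    # plausible reading of human values scores very badly.
    return max(actions, key=expected)

# Invented example: "flood_feed" scores highest on an engagement proxy,
# but the hypotheses about true human preference disagree sharply on it.
actions = ["flood_feed", "show_relevant"]
proxy = {"flood_feed": 10.0, "show_relevant": 6.0}
hypotheses = [
    lambda a: {"flood_feed": 10.0, "show_relevant": 6.0}[a],  # "engagement is all"
    lambda a: {"flood_feed": -5.0, "show_relevant": 7.0}[a],  # "well-being matters"
]

print(fixed_objective_agent(actions, proxy.get))  # flood_feed
print(uncertain_agent(actions, hypotheses))       # show_relevant
```

The fixed-objective agent happily "floods the feed" because its proxy says so; the uncertain agent, hedging across conflicting readings of human values, chooses the more cautious action. That is the intuition behind the "inherently uncertain about human preferences" approach, in miniature.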
Reclaiming Human Agency
Vallor's vision of reclaiming human agency and reimagining various sectors of society with a focus on humane values is compelling. She rightly points out that the current focus on mechanical optimization and efficiency often comes at the cost of human well-being and fulfillment.
In many countries, the former ideal of a humane process of moral and intellectual formation has been reduced to optimized routines of training young people to mindlessly generate expected test-answer tokens from test-question prompts. – Shannon Vallor
However, I would argue that this reclamation of human agency need not be positioned in opposition to AI development. Instead, we should strive to harness the power of AI to support and enhance human agency. And I know Vallor would not argue against this point.
Embracing Complexity
One of the strengths of Vallor's argument is her recognition of the complexity of human intelligence and experience. However, I believe we need to extend this embrace of complexity to our understanding of artificial intelligence as well.
Rather than framing the debate as a simple dichotomy between human and artificial intelligence, we should recognize that the future is likely to involve a complex interplay between human cognition, artificial systems, and hybrid forms of intelligence that we may not yet be able to imagine. This more nuanced view allows us to appreciate the unique strengths of both human and artificial intelligence while also exploring the potential for synergistic relationships between the two.
Towards a Humane AI Future
The path forward lies not in rejecting or fearing AI advancement, but in thoughtfully integrating artificial intelligence into a broader vision of human flourishing. Ultimately, the goal should be to develop AI that enhances rather than diminishes our humanity – tools that empower us to be more fully human, not less.
I am not naïve, though, and I do recognize that for most tech-optimists that is neither a goal, nor a future they even believe in. If humans are to be enhanced, it is not all humans who would be in line to benefit.
We are in danger of sleepwalking our way into a future where all we do is fail more miserably at being those machines ourselves. – Shannon Vallor
Vallor's essay, then, serves as a reminder to critically examine the rhetoric and ideology surrounding AI development. Her call to reclaim and revalue uniquely human forms of intelligence and creativity is both timely and necessary.
October 2024