System Error

Why We Can't Compute Ethics

This is a summary of a seminar presentation I gave this week as part of my PhD studies. I hope it captures the crux of the argument.

Our centuries-long quest to mechanize human thought reveals a profound paradox. Since the 17th century, we've attempted to reduce human cognition to mechanical processes, driven by an unwavering belief that consciousness could be replicated through rules and symbols. This mechanistic dream reached its apex with Alan Turing's famous test in the 1950s, which suggested that if a machine could fool a human through conversation, it must be thinking.

The traditional approach to artificial intelligence relied heavily on explicit rules and symbol manipulation. Consider how languages were traditionally taught: through strict grammatical rules and vocabulary memorization. Our early attempts at creating artificial intelligence mirrored this - we believed that by programming enough rules and building sufficiently sophisticated symbol-manipulation systems, we could replicate human thought.
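To make that symbolic style concrete, here is a minimal sketch in Python of a rule-based responder in the spirit of 1960s systems like ELIZA. The rules and responses are invented for illustration, but the brittleness they expose is the real problem such systems faced.

```python
# A toy symbolic "AI": hand-written pattern -> response rules,
# in the spirit of early systems such as ELIZA.
# All rules and responses here are invented for illustration.

RULES = [
    ("i feel", "Why do you feel {rest}?"),
    ("my",     "Tell me more about your {rest}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip()
    for trigger, template in RULES:
        if text.startswith(trigger):
            rest = text[len(trigger):].strip()
            return template.format(rest=rest)
    return "Please go on."  # fallback when no rule fires

print(respond("I feel lost"))         # Why do you feel lost?
print(respond("My home is chaotic"))  # Tell me more about your home is chaotic.
# The ungrammatical second reply shows how rigid rules break
# the moment input strays from what the programmer anticipated.
```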

Jonathan Swift, in Gulliver's Travels, satirized this very notion, describing a machine that could supposedly produce philosophical and scientific works without any real understanding. His satire eerily foreshadowed our modern language models, highlighting our persistent desire to reduce human creativity to mechanical processes.

A revolutionary shift came with the 2012 ImageNet breakthrough of two University of Toronto graduate students – Alex Krizhevsky and Ilya Sutskever – and their Nobel Prize-winning supervisor, Geoffrey Hinton. Instead of programming rigid rules, they built systems that learn through pattern recognition and statistical inference. This approach more closely mirrors how humans actually learn and understand the world.

Consider how children acquire language: not through memorizing grammar rules, but through immersion, play, and pattern recognition. This natural learning process involves identifying regularities in their environment and making predictions based on past experiences. The success of this approach in AI has forced us to reconsider our understanding of consciousness itself.

Modern AI systems, particularly large language models, operate by grouping concepts according to their relationships. Each word is represented as a vector in a high-dimensional space, and the distances between vectors track how closely related the underlying concepts are. When we discuss homes, for instance, the system naturally associates related concepts like furniture, comfort, and specific room types.
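As a rough illustration of what "proximity in a multidimensional space" means, here is a minimal sketch with invented four-dimensional vectors. Real models learn embeddings with hundreds or thousands of dimensions from data; these numbers are hand-picked only to show the geometry.

```python
import numpy as np

# Toy 4-dimensional "embeddings" (invented for illustration).
vectors = {
    "home":    np.array([0.9, 0.8, 0.1, 0.0]),
    "sofa":    np.array([0.8, 0.7, 0.2, 0.1]),
    "kitchen": np.array([0.7, 0.9, 0.1, 0.2]),
    "galaxy":  np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("sofa", "kitchen", "galaxy"):
    sim = cosine_similarity(vectors["home"], vectors[word])
    print(f"home vs {word}: {sim:.2f}")
# "home" lands near "sofa" (0.99) and "kitchen" (0.97),
# far from "galaxy" (0.12): proximity encodes relatedness.
```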

This represents a fundamental shift from deterministic rule-following to probabilistic prediction - a change that better reflects human cognition. The success of this approach has challenged our traditional understanding of both artificial and human intelligence.
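A minimal sketch of what this shift looks like in code, with invented scores: rather than a rule that fixes the next word, the model assigns every candidate a probability and then ranks or samples from that distribution.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability distribution summing to 1."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Invented scores a model might assign to continuations of
# "The cat sat on the ...". No rule picks a word; each gets a probability.
candidates = ["mat", "sofa", "roof", "galaxy"]
logits = np.array([3.1, 2.4, 1.2, -2.0])

for word, p in zip(candidates, softmax(logits)):
    print(f"{word:>7}: {p:.2f}")
# Every candidate remains possible; the model ranks or samples from
# the distribution instead of following a deterministic rule.
```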

In their 1998 paper "The Extended Mind", Andy Clark and David Chalmers proposed that the mind doesn't stop at the skull. They argued that our cognition extends into our environment through the tools we use. Consider their example of Otto, a man with memory problems who relies on a notebook to navigate his daily life. The notebook becomes an extension of his memory, fundamentally integrated into his cognitive processes.

This concept applies even more powerfully to modern technology. Our smartphones aren't just passive tools - they're active participants in our cognitive processes. They observe our world, predict our needs, and help shape our decisions. This integration produces a cognitive ecosystem in which intelligence emerges not from individual components but from their interaction - and with it a complex moral network that raises hard questions about moral responsibility in human-AI interactions.

Consider the tragic case from early 2024 where a teenager took his own life after developing an emotional dependency on an AI chatbot. The teen had been communicating with a character-based AI for nearly a year, eventually losing his ability to distinguish between reality and fiction. Who bears responsibility in such cases? The user? The AI system? The platform? The developers? The regulators?

As more layers of responsible parties are added to this network, it becomes increasingly difficult to pinpoint specific accountability. This "moral diffusion" presents a significant challenge in governing AI systems. The solution may lie in maintaining human moral judgment as the cornerstone of ethical decision-making while leveraging AI's pattern-recognition capabilities.

Traditional rule-based approaches to AI ethics, like Asimov's Three Laws of Robotics, prove insufficient when confronted with real-world complexity. A virtue ethics approach might be more suitable: developing systems that can recognize and respond to patterns of ethical behavior rather than follow rigid rules.

As we move forward, we must recognize that intelligence isn't located in isolated nodes but emerges from the interaction of entire networks. Following Clark and Chalmers' thinking, we can see how AI systems are already becoming integral parts of our cognitive networks.

The fundamental error wasn't in our machines but in our oversimplified understanding of consciousness. Pattern recognition and probabilistic thinking, rather than rule-following, appear to be fundamental to both human and artificial intelligence. As we continue to develop AI systems, we must maintain human moral judgment at the center of ethical decision-making while acknowledging the extended nature of our consciousness through technology.

The development of AI has revealed that our attempts to reduce consciousness to mechanical processes were misguided. The challenge ahead lies not in creating perfectly autonomous AI systems, but in developing frameworks that allow human and artificial intelligence to work together effectively while maintaining clear lines of moral responsibility. This requires a deeper understanding of both human consciousness and the nature of intelligence itself.

The future of AI development should focus not on replacing human cognition but on enhancing it through thoughtful integration of artificial and natural intelligence. This approach recognizes both the unique capabilities of AI systems and the irreplaceable nature of human moral judgment.

December 2024
