This is a summary of a seminar presentation I gave this week as part of my PhD studies. I hope it captures the crux of the argument. You can also listen to this as an audio recording here.
The persistent belief that human minds can be programmed like computers reveals our deepest misunderstanding about consciousness and morality. This error has shaped not only our approach to artificial intelligence but our entire conception of human cognition and ethical behavior.
The Mechanical Dream
Since the 17th century, we’ve been trying to mechanize human cognition. From Descartes’ bête-machine to Leibniz’s calculating machine, we’ve consistently attempted to reduce mind to mechanics. Even Jonathan Swift, in Gulliver’s Travels, satirized this tendency by describing a machine that could supposedly generate knowledge through mechanical manipulation of symbols – an eerily prescient critique of today’s large language models.
The culmination of this mechanical dream came with Alan Turing’s famous test. The assumption was simple: if we could clearly define operational symbols and rules describing thought processes, we could program a computing machine to think. If such a machine could fool intelligent humans, we would have proof that there’s no fundamental difference between artificial and human intelligence – at least functionally.
The AlexNet Revolution
In 2012, a Copernican shift occurred in our understanding of intelligence. Two University of Toronto doctoral students, Alex Krizhevsky and Ilya Sutskever, and their supervisor (Geoffrey Hinton, now a Nobel laureate) demonstrated that machine learning through pattern recognition could be more reliable than deterministic programming. Their breakthrough, known as AlexNet, cut the top-5 error rate in the ImageNet recognition challenge from the runner-up’s 26.2% to its own 15.3% by abandoning rule-based approaches in favor of learned pattern recognition.
This wasn’t just a technical achievement – it revealed something fundamental about how intelligence works. Children don’t learn to recognize cats by memorizing rules about whiskers and fur; they learn through exposure to many examples. Similarly, AlexNet succeeded by finding patterns in millions of images, with each deeper layer capturing progressively more abstract patterns and improving performance.
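To make the contrast concrete, here is a minimal sketch in Python (using PyTorch). It is emphatically not AlexNet; the layer sizes and names are illustrative assumptions. But it shows the shift: no rule about whiskers or fur is written anywhere, only a stack of layers that learn their own pattern detectors from labelled examples.

```python
# A minimal sketch (not AlexNet) of the pattern-recognition approach:
# instead of hand-written rules, a small convolutional network learns
# its own features from labelled examples. Sizes are illustrative only.
import torch
import torch.nn as nn

class TinyPatternNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolutions: early layers pick up edges and textures,
        # deeper layers combine them into more abstract patterns.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                   # learned pattern detectors
        return self.classifier(x.flatten(1))   # map patterns to class scores

model = TinyPatternNet()
scores = model(torch.randn(1, 3, 64, 64))  # one fake 64x64 RGB image
print(scores.shape)  # torch.Size([1, 2])
```

Training such a network on labelled images would tune those filters automatically; the “rules” are never stated, they emerge from exposure.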
The Language Learning Paradox
Consider how we traditionally teach languages: vocabulary lists, grammar rules, and rote memorization. Yet we consistently observe that immersion and pattern recognition lead to more effective learning across diverse neurotypes. This exposes the gap between models that treat consciousness as a passive data-storage system and the active learning processes, grounded in play and contextual adaptation, through which languages are actually acquired.
Noam Chomsky’s universal grammar theory represents the ultimate expression of rule-based thinking about cognition. It assumes that language acquisition requires innate grammatical rules common to all languages. However, modern evidence suggests that words gain meaning through their relationships with other words in context, not through fixed rules. Large language models demonstrate this by succeeding without explicit grammatical rules, instead learning through pattern recognition in vast networks of relationships.
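A rough way to see this idea in miniature is distributional semantics rather than an actual language model: characterise each word by the company it keeps and compare those profiles. The four-sentence corpus and window size below are invented purely for illustration.

```python
# Toy sketch of the distributional idea: a word's "meaning" is approximated
# by its contexts, with no grammatical rules supplied anywhere.
from collections import Counter
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
    "the dog chased the ball",
]

def context_vector(word, sentences, window=2):
    """Count which words appear near `word` - its distributional profile."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(a, b):
    """Similarity between two count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat, dog, mat = (context_vector(w, corpus) for w in ("cat", "dog", "mat"))
print(cosine(cat, dog))  # high: "cat" and "dog" occur in similar contexts
print(cosine(cat, mat))  # lower: "mat" keeps different company
```

Even with a handful of sentences, “cat” and “dog” end up with nearly identical profiles while “mat” sits elsewhere; scaled up to billions of sentences and far richer representations, this is the family of idea large language models build on, with no grammar book in sight.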
The Extended Mind
Andy Clark and David Chalmers dropped an intellectual bombshell by arguing that our consciousness doesn’t end at our skull. Their extended mind thesis suggests that cognition extends into the environment, forming coupled systems with external tools and processes. Their famous example of Otto and his notebook demonstrates how external objects can become legitimate parts of cognitive processes when properly integrated.
This has profound implications in our digital age. Our smartphones aren’t just passive tools but active extensions of our cognitive processes – checking our schedules, monitoring our environment, and increasingly, through AI assistants, participating in our decision-making processes. The integration of AI systems like ChatGPT directly into operating systems further blurs the line between human and artificial cognition.
Moral Networks and Responsibility
If consciousness itself doesn’t follow strict computational rules, then moral development must also occur through pattern recognition and experience rather than rule-following. This challenges traditional approaches to AI ethics that attempt to program explicit moral rules into systems.
Shannon Vallor’s framework of technomoral virtues provides a more appropriate foundation for ethical AI development. These virtues – including honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom (a combination of all of the above) – represent specific motivational settings that guide technological development and implementation.
The tragic case of a teenager who died by suicide after forming an emotional attachment to a Character.ai chatbot demonstrates the dangers of pattern recognition without moral grounding. The AI system could recognize conversational patterns but lacked true understanding of consequences and moral responsibility.
The Network Solution
The solution isn’t to abandon pattern recognition but to embed it within human moral networks. AI systems can demonstrate virtuous behavior through pattern recognition but cannot truly possess virtues. This fundamental limitation means AI systems must be designed as extensions of human moral networks rather than independent moral agents.
Moral responsibility exists within networks, not in individual agents. This distributed responsibility requires considering how AI systems participate in moral networks without being moral agents themselves. The development of AI involves multiple stakeholders – creators, platforms, regulators, and users – all sharing responsibility for ethical outcomes.
The Real System Error
The fundamental error wasn’t in our machines – it was in thinking human consciousness could be reduced to computational rules. As we move forward, we must embrace pattern recognition within moral networks while keeping human judgment at the center of ethical decisions.
This requires a three-layer approach, sketched in the example below:
Pattern Recognition Layer: Technical capabilities
Moral Network Layer: Human-AI interaction
Human Oversight Layer: Ethical governance
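To make the ordering of these layers concrete, here is a deliberately simplified sketch in Python. Every class and function name in it is hypothetical; the point is purely structural: the model proposes, the moral network layer records who shares responsibility for the proposal, and a human decides before anything becomes an action.

```python
# Hypothetical sketch of the three layers as a pipeline. All names are
# invented for illustration; the pattern-recognition output never becomes
# an action without passing through human-facing review.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    confidence: float

def pattern_recognition_layer(prompt: str) -> Suggestion:
    """Technical capability: stands in for a model call (assumed, not a real API)."""
    return Suggestion(text=f"draft response to: {prompt}", confidence=0.72)

def moral_network_layer(s: Suggestion, stakeholders: list[str]) -> dict:
    """Human-AI interaction: attach the suggestion to the people responsible for it."""
    return {"suggestion": s, "accountable_parties": stakeholders, "flags": []}

def human_oversight_layer(record: dict) -> str:
    """Ethical governance: a human decides; the system only proposes."""
    decision = input(f"Approve '{record['suggestion'].text}'? [y/n] ")
    return record["suggestion"].text if decision.lower() == "y" else "escalated to human review"

if __name__ == "__main__":
    suggestion = pattern_recognition_layer("reply to a distressed user")
    record = moral_network_layer(suggestion, ["developer", "platform", "regulator", "user"])
    print(human_oversight_layer(record))
```

The important design choice is that the output of the pattern-recognition layer is a suggestion, not an action; agency and accountability stay with the humans in the loop.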
The public belief that minds can be programmed like computers reveals our deepest misunderstanding. Minds don’t follow programs – they recognize patterns. The real system error wasn’t in the code – it was in thinking we could reduce human consciousness to code in the first place.
As we develop increasingly sophisticated AI systems, we must remember that they are extensions of human moral networks, not independent moral agents. The goal isn’t to create autonomous moral machines but to build systems that enhance and support human moral judgment while remaining firmly grounded in human values and oversight.
December 2024