TIL #4: Can We Teach (AI) Virtuous Behavior at All?
4th thing I learned: McDowell's conditions for virtuous behavior. I’m thinking: can we, following Turing, at least imitate it in artificial systems?
John McDowell sets out conditions for what makes virtuous behavior possible. I’m reading him and asking, by extension, whether we could create artificially virtuous beings by meeting those conditions. Think of the Turing test.
The Knowledge Paradox
Rather than viewing virtue as following moral rules, McDowell argues that virtue is a form of knowledge - but not the kind we typically imagine. It's more like a perceptual capacity that allows us to recognize what situations require of us. This raises an interesting challenge:
We can't reduce virtue to a set of programmable rules
Virtuous behavior requires a holistic understanding of contexts
The knowledge involved can't be broken down into neat algorithms
Beyond Rule-Following
What makes McDowell’s theory particularly relevant for AI is his critique of rule-following. He argues that even seemingly straightforward rule-following (like continuing a number sequence) depends on shared forms of life - our common ways of seeing similarities and making judgments.
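McDowell’s point here echoes Wittgenstein: a finite run of a sequence never fixes the rule behind it. A minimal sketch of that underdetermination in plain Python - the “deviant” fifth term is chosen arbitrarily to show that another rule fits the same data:

```python
# Underdetermination of rule-following: the sequence 2, 4, 6, 8 is
# consistent with infinitely many rules. Besides f(n) = 2n, a degree-4
# polynomial can match the same four terms and then diverge.

def lagrange(points):
    """Return the function interpolating the given (x, y) points."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Force an arbitrary "deviant" continuation, 100, at n = 5.
deviant = lagrange([(1, 2), (2, 4), (3, 6), (4, 8), (5, 100)])

print([round(deviant(n)) for n in range(1, 5)])  # [2, 4, 6, 8] - same prefix as 2n
print(round(deviant(5)))                         # 100, not 10
```

No amount of observed behavior singles out “add 2” over the deviant rule; for McDowell, what does the singling-out is our shared form of life, not the data.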
This has profound implications for ethics:
We can't program virtue through explicit rules
Virtuous behavior requires participation in human forms of life
Pure computational approaches may miss essential elements of moral judgment
The Learning Challenge
McDowell suggests that becoming virtuous involves developing a special kind of sensitivity rather than memorizing principles. If we stretch this all the way to AI, this means:
Simple training on ethical datasets won't suffice
We need to consider how to develop genuine moral sensitivity
The challenge may be more fundamental than technical
Why This Matters
The GOFAI Challenge
McDowell's argument that virtue cannot be reduced to formulable rules poses a challenge to (symbolic) GOFAI's rule-based approach to ethical AI. Just as human virtue cannot be captured in a set of explicit principles, trying to program ethical behavior through rule-based systems may be fundamentally misguided.
The Connectionist Opening
However, connectionist approaches might offer a more promising path:
Learning from Experience: Neural networks learn from patterns and examples rather than explicit rules, similar to McDowell's description of how virtue is acquired through developing perceptual sensitivity
Context Sensitivity: Connectionist systems can develop nuanced responses to situations that aren't easily captured in rules, potentially matching McDowell's emphasis on context-dependent judgment
Holistic Processing: The distributed representations in neural networks might better capture the holistic nature of moral perception that McDowell describes
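To make the contrast with rule-based systems concrete, here is a toy example-driven judge: a single logistic unit fit to labeled cases, never given an explicit rule. The (harm, consent, benefit) features and their labels are invented for illustration - nothing like real moral learning, which is rather McDowell’s point:

```python
# Example-driven "judgment": a logistic unit trained by gradient descent
# on a handful of labeled situations. The features and labels below are
# entirely hypothetical; each situation is (harm, consent, benefit) in [0, 1].
import math

examples = [
    ((0.9, 0.1, 0.2), 0),  # high harm, low consent -> judged wrong
    ((0.1, 0.9, 0.8), 1),  # low harm, consensual, beneficial -> judged ok
    ((0.8, 0.2, 0.9), 0),
    ((0.2, 0.8, 0.3), 1),
]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):  # plain gradient descent on log loss
    for x, y in examples:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))    # sigmoid
        g = p - y                     # gradient of log loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def judge(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

print(judge((0.85, 0.15, 0.5)))  # generalizes from cases, not from a stated rule
```

The unit ends up weighing harm against consent without anyone having written that trade-off down - a faint analogue of acquired sensitivity, though still pattern-matching over features someone chose.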
The Deeper Challenge
Yet McDowell's argument suggests limits even for connectionism:
The shared forms of life that ground human moral understanding may be inaccessible to artificial systems, even in principle
The kind of sensitivity required for true virtue might depend on embodied participation in human practices that goes beyond pattern recognition
While connectionist approaches might better approximate aspects of moral learning, McDowell's analysis indicates that genuine virtue may require forms of engagement with the world that are not yet replicated artificially (and, according to Vallor, probably cannot be). Though LeCun could argue that we’re getting there.
McDowell's insights suggest we should focus less on programming explicit ethical rules and more on understanding how to develop genuine sensitivity (close to Vallor’s vulnerability gap) to moral situations – while remaining aware of the fundamental challenges this poses.
November 2024