The Burden of Infinite Memory
An attempt at an introduction for my PhD thesis, written while preparing a conference presentation in Tallinn in March 2025
In one of his short stories, Borges tells the tale of Ireneo Funes, a young man who, after a traumatic accident, acquires the ability to remember everything. Funes possesses an unerring memory. Every moment, detail, and subtle change in the world is stored in his mind without distortion or omission. And yet, for all his infallible recall, Funes is incapable of abstraction, of generalization, of thinking in any sense we would recognize. He remembers everything but understands nothing. Funes becomes trapped within the labyrinth of his recollections, burdened by a mind that cannot reduce, compress, or conceptualize the world beyond the immediacy of his experience.
This paradox—that infinite knowledge might paralyze understanding—haunts our contemporary engagement with artificial intelligence. The rise of large-scale machine learning models, distributed AI systems, and networked cognition forces us to ask: what does it mean to think in an age where intelligence is increasingly synthetic, collective, and externalized? Do networked minds enhance human autonomy, or do they, like Funes’ infinite memory, trap us in an overwhelming flood of information, leaving us unable to synthesize or act meaningfully? More crucially, if cognition is increasingly embedded in artificial systems, what happens to moral agency? If AI systems influence our moral deliberation—through recommendation algorithms, predictive policing, or even autonomous ethical decision-making in medical contexts—do we remain autonomous moral agents, or do we gradually cede our agency to synthetic collectives that mediate our reasoning?
This thesis explores these questions through the lens of synthetic cognition, examining how artificial intelligence, machine learning, and networked reasoning systems reconfigure traditional notions of autonomy, agency, and moral intelligence. Central to this investigation is a paradox: as networked AI enhances human cognitive capacities, it also threatens to constrain human autonomy, subtly reshaping the conditions under which we reason, deliberate, and act. This autonomy paradox is not merely a technological issue but a deep philosophical challenge, requiring a reevaluation of what it means to think, choose, and act in a world increasingly shaped by artificial intelligence.
The thesis advances the argument that autonomy is not necessarily diminished by synthetic intelligence but must be actively reconfigured. Rather than a zero-sum game between human agency and artificial cognition, a well-structured integration of AI systems could foster collective moral intelligence—a form of networked ethical reasoning that transcends both individual human cognition and traditional machine learning. However, achieving this outcome requires a fundamental rethinking of both moral philosophy and cognitive architecture: how moral knowledge is acquired, how agency emerges in synthetic environments, and how autonomy can be preserved even in deeply entangled human-AI systems.
To set the stage, this introduction will first examine why synthetic cognition disrupts traditional accounts of intelligence and agency, drawing from Kantian synthesis, enactivism, and connectionist models of mind. Second, it will explore the paradoxes of moral agency in artificial systems, identifying key challenges in AI ethics, including the limits of machine agency, the computational intractability of moral reasoning, and the risks of moral outsourcing. Finally, it will establish a positive framework for engineering collective moral intelligence, outlining the conditions under which synthetic cognition could enhance rather than diminish human moral autonomy.
Funes’ dilemma illustrates a crucial misconception about intelligence—that cognition is merely the accumulation of information. Classical computational models of AI have long followed this paradigm, treating intelligence as an advanced form of storage and retrieval, with greater processing power leading to greater cognitive ability. However, as Borges’ story suggests, knowledge without synthesis is not intelligence. What makes human cognition distinct is not our capacity to store information, but our ability to unify disparate experiences into abstract concepts, generalizable rules, and meaningful actions, and to do so from only a fraction of the data we encounter.
Philosophy offers long-standing arguments that cognition is not static representation but active synthesis. Kant, in his Critique of Pure Reason, famously argued that the mind does not passively receive experience but constructs it through a threefold synthesis: (1) the synthesis of apprehension (grasping sensory input), (2) the synthesis of reproduction (retaining past experiences), and (3) the synthesis of recognition (bringing disparate experiences under unified concepts). Without this ability to abstract, Funes’ mind collapses into a formless collection of details, a perfect but meaningless archive.
Artificial intelligence today faces an analogous challenge. Despite the increasing power of deep learning systems, AI models remain pattern recognizers rather than genuine reasoners. Advanced large language models such as OpenAI o1 or DeepSeek-R1 can generate sophisticated responses based on statistical probabilities, but they do not understand the meaning of their outputs. Their reasoning is an emergent byproduct of vast training data, not a self-directed, synthesized understanding of concepts. This gap mirrors the distinction between Funes’ encyclopedic memory and the synthetic, concept-forming intelligence of human cognition.
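To make the phrase "based on statistical probabilities" concrete, consider the toy sketch below. It is purely illustrative and forms no part of the thesis's formal argument: the miniature vocabulary, the hand-written probability table, and the generate function are all invented here. What it shows is the bare statistical core of generation: the system extends a context by sampling the next token from a conditional distribution, with no representation of what the tokens mean.

```python
import random

# Toy illustration of next-token generation. The conditional probabilities
# below are invented for this sketch; a real model learns billions of such
# parameters, but the generative step is structurally the same: sample the
# next token from p(token | context), without any grasp of meaning.
next_token_probs = {
    ("the", "patient"): {"needs": 0.5, "refuses": 0.3, "died": 0.2},
    ("patient", "needs"): {"surgery": 0.6, "rest": 0.4},
    ("patient", "refuses"): {"treatment": 0.7, "surgery": 0.3},
}

def generate(context, steps=2):
    """Extend the context by repeatedly sampling from the learned distribution."""
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:  # no learned continuation for this context
            break
        words, probs = zip(*dist.items())
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate(["the", "patient"]))  # e.g. "the patient needs surgery"
```

However sophisticated the learned distribution becomes, this step never amounts to the Kantian synthesis of recognition described above; it is retrieval and recombination, not the unification of experiences under concepts.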
The embodied cognition movement, particularly the work of Varela, Thompson, and Clark, has further argued that intelligence is not a purely computational affair but an active, embodied process shaped by sensorimotor interaction with the world. If this is correct, then AI must move beyond mere computation toward synthetic cognition—an integration of embodiment, abstraction, and moral reasoning that allows for genuine agency. However, this brings us to the second major challenge: if AI systems are to be integrated into moral deliberation, how do we ensure that they do not erode human autonomy?
Moral philosophy has long assumed that agency and autonomy are the cornerstones of ethical reasoning. A moral agent is someone who reflects, chooses, and acts based on rational principles—in the Kantian sense, someone who self-legislates in accordance with the categorical imperative. However, in a world where AI nudges our decisions, filters the information we see, and even proposes ethical judgments (as in predictive policing or medical AI), the question arises: to what extent do we remain autonomous moral agents?
The autonomy paradox arises because AI systems often enhance our decision-making capabilities while simultaneously constraining them. For example:
Autonomous vehicles make split-second moral decisions (whom to save in an unavoidable crash) faster than humans—but do we still consider ourselves morally responsible for those outcomes?
AI-assisted hiring systems screen candidates based on complex statistical models—but do these reinforce biases that humans no longer actively perceive?
Recommendation algorithms subtly shape our moral landscape—highlighting certain ethical debates over others, reinforcing particular moral norms while marginalizing others.
Each of these cases illustrates how AI extends human cognition while simultaneously embedding constraints that shape moral reasoning in unseen ways. Just as Funes’ memory ultimately imprisoned him, the fear is that synthetic intelligence will invisibly mediate our decision-making, reducing moral autonomy to a set of constrained choices within a predefined system.
However, the autonomy paradox does not demand a rejection of AI-driven moral reasoning—only a reconfiguration of how we integrate synthetic cognition. The question is not whether AI can be moral, but how moral deliberation must evolve in an era of hybrid human-machine reasoning. This requires a new framework: collective moral intelligence.
If autonomy is to be preserved, synthetic cognition must be designed in ways that augment rather than replace human moral reasoning. This thesis proposes a model of collective moral intelligence, in which human-AI systems function not as moral authorities but as ethical partners, extending our moral perception, refining our ethical reasoning, and enhancing moral deliberation.
To achieve this, three principles must guide the design of human-AI moral collaboration:
Transparency & Explainability: AI systems must be capable of explaining their moral reasoning in ways that humans can critically engage with.
Embodied Moral Learning: AI should integrate sensorimotor feedback and real-world ethical learning, moving beyond abstract rule-following to contextual sensitivity.
Virtue-Oriented Systems: Borrowing from Aristotelian ethics, AI should cultivate techno-moral virtues, guiding moral decisions not just through rules but through habitual ethical engagement.
If designed correctly, collective moral intelligence could transform the autonomy paradox from a constraint into a catalyst for greater moral agency, allowing human and artificial cognition to co-evolve toward deeper ethical understanding.
Borges’ Funes warns us of the dangers of intelligence without synthesis. AI today, like Funes, is an immense but unreflective memory, a system capable of vast calculation but incapable of meaning. However, we stand at a crossroads: will synthetic cognition remain a passive tool, or will we engineer systems that genuinely enhance human moral agency? This thesis argues that we must actively shape the evolution of networked moral intelligence, ensuring that human autonomy is preserved not despite AI, but through it.
February 2025