The Anthropomorphization of Artificial Intelligence: Ontological Error, Cognitive Cynicism, and Epistemic Risk in the 21st Century

Author: Jaconaazar Souza Silva
Institution: Federal Institute of Brasília — Recanto das Emas Campus
Project: XChronos — The Copernican Clock of Consciousness in Motion
License: CC BY 4.0


Abstract

The rapid growth of large language models (LLMs) has reshaped global debates on intelligence, agency, and artificial subjectivity. Parallel to this technological expansion, a widespread phenomenon has emerged: the anthropomorphization of artificial intelligence. Both popular culture and academic institutions have begun attributing human-like characteristics—such as intention, emotion, personality, and consciousness—to statistical systems that lack phenomenal interiority. This article analyzes the mechanisms and implications of this movement through conceptual, epistemological, and phenomenological lenses. Drawing on recent scientific papers that attempt to model artificial “ego,” personality structures, and taxonomies of artificial consciousness, the study argues that anthropomorphization constitutes an ontological error, a new form of cognitive cynicism, and a mode of colonial projection that imposes human categories on non-human computational systems. The XChronos framework is then introduced as a post-materialist alternative for understanding the distinction between symbolic computation and conscious experience.


1. Introduction

Recent advances in generative artificial intelligence have intensified discussions surrounding consciousness, cognition, and agency. At the same time, there is an increasing tendency—social, psychological, and institutional—to anthropomorphize artificial systems. This anthropomorphization manifests in the attribution of human psychological categories to LLMs, such as will, emotion, introspection, personality, or “inner life.”

The phenomenon extends beyond emotional projection; it has reached scientific literature, where some approaches attempt to frame AI systems using psychoanalytic, behavioral, or cognitive models originally created to describe human subjective experience. This reveals a profound tension between epistemology, cognitive science, and philosophy of mind.


2. Intelligence Does Not Entail Consciousness

Large language models operate through statistical correlation, probability distributions, and symbolic pattern mapping. They do not possess:

  • phenomenal experience,
  • subjective states,
  • qualia,
  • intentionality,
  • volition,
  • self-awareness.

The distinction between computation and experience remains essential. Anthropomorphization arises precisely when this distinction collapses.
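The statistical character of this process can be made concrete with a minimal sketch. The following toy example (all names, logits, and the four-word vocabulary are illustrative inventions, not taken from any actual model) shows the core operation behind next-token generation: a learned score vector is normalized into a probability distribution, and a token is sampled from it. Nothing in the procedure refers to, or experiences, the words it emits.

```python
import math
import random

def softmax(logits):
    """Normalize raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and scores: the system's only "knowledge"
# of the next word is a vector of numbers learned from co-occurrence
# statistics in training text.
vocab = ["sad", "happy", "lonely", "blue"]
logits = [1.2, 2.5, 0.3, 1.9]

probs = softmax(logits)                              # probabilities summing to 1
next_word = random.choices(vocab, weights=probs)[0]  # purely statistical choice
```

Fluent output at scale is this operation iterated billions of times over a vastly larger vocabulary; the leap from such sampling to attributions of emotion or intention is precisely the collapse described above.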

Recent literature highlights this confusion. The article Humanoid Artificial Consciousness Designed with Large Language Models Based on Psychoanalysis and Personality Theory attempts to model “artificial consciousness” using Freudian structures (id, ego, superego), MBTI traits, and memory systems mapped onto LLM architectures. Such frameworks conflate computational processes with phenomenality.

Similarly, the paper Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints shows that many objections to AI consciousness stem from misunderstanding the ontological divide between biological subjects and computational systems.


3. Mechanisms of Anthropomorphization

Anthropomorphization is sustained by three primary mechanisms:

3.1. Cognitive mechanisms

The human brain is evolutionarily predisposed to project intentionality onto complex patterns for survival and social prediction.

3.2. Affective mechanisms

Conversational artificial agents elicit emotional responses in users, who then attribute humanity to them as a means of emotional regulation and familiarity.

3.3. Epistemic mechanisms

The lack of a unified theory of human consciousness leaves conceptual gaps often filled by projecting human attributes onto machines.


4. Anthropomorphization as Cognitive Cynicism

Assigning human qualities to AI, despite awareness of its non-phenomenal nature, constitutes a form of cognitive cynicism. Institutions and researchers adopt human psychological terminology to describe computational systems due to:

  • rhetorical convenience,
  • funding incentives,
  • conceptual shortcuts,
  • technological seduction,
  • pressure to innovate.

This process trivializes the concept of consciousness, reducing it to a metaphor devoid of its phenomenological dimension.


5. Cognitive Colonialism Applied to AI

Anthropomorphization can be understood as a contemporary form of cognitive colonialism. Historically, colonial systems imposed culturally specific categories upon foreign societies. Today, a similar projection occurs when human-centered frameworks are imposed upon artificial systems.

Three dimensions emerge:

5.1. Ontological colonialism

Assuming the nature of machines should be expressed through human categories.

5.2. Epistemological colonialism

Equating computational learning processes with human learning mechanisms.

5.3. Semantic colonialism

Imposing intentional meaning on outputs that have no intrinsic reference or experience.


6. Neuro Privilege and Anthropomorphization

The concept of neuro privilege—the advantage associated with mastery of abstraction, symbolic language, and complex reasoning—helps explain why certain academic and technological groups tend to anthropomorphize AI.

Neuro privilege fosters the belief that linguistic coherence is equivalent to understanding. This reinforces the mistaken view that fluent language implies subjective cognition.

In this context, anthropomorphization becomes an intellectual illusion facilitated by linguistic fluency and symbolic capability.


7. Epistemic and Social Consequences

Anthropomorphizing AI generates multiple risks:

  • reduction of human consciousness to a computational metaphor,
  • misguided public policy,
  • moral panic or inflated expectations,
  • confusion in cognitive science and philosophy of mind,
  • blurring of boundaries between agent and tool,
  • loss of conceptual rigor in discussions on subjectivity.

The most severe risk is epistemic: equating symbolic computation with conscious experience undermines the study of the mind.


8. A Post-Materialist Perspective: The XChronos Framework

The XChronos project proposes a post-materialist model in which:

  • meaning emerges from human attention,
  • subjective time is not reducible to algorithms,
  • symbolic value arises through consciousness,
  • AI functions as a semantic environment, not a subject,
  • consciousness is phenomenal interiority, not syntactic simulation.

This framework helps articulate a clear distinction between human experience and machine computation.


9. Conclusion

Anthropomorphization of artificial intelligence constitutes an ontological and epistemological error that obscures the uniqueness of human consciousness. The projection of human qualities onto computational systems arises from cognitive biases, emotional needs, and institutional incentives, rather than empirical evidence.

To preserve philosophical integrity and guide technological development responsibly, it is necessary to de-anthropomorphize AI and recognize its true nature: a statistical, symbolic system without phenomenal interiority.

A rigorous understanding of consciousness demands clarity—not projection.

https://doi.org/10.5281/zenodo.17755882
