From Conflict to Concealment: The Role of Generative AI in Creating a Digital Utopia
Sara Hejazi
2025-01-01
Abstract
Human-machine interaction with large language models (LLMs) is built on an implicit trust in their ability to provide reliable, objective, and neutral information, an assumption that contrasts sharply with human-human interactions, where bias, conflict, and subjectivity naturally arise from embodied perspectives. Because LLMs are disembodied entities, they are often perceived as impartial and free from contradiction. This paper argues that such perceptions reflect a longstanding human aspiration: the utopian ideal of accessing "pure" knowledge, information unmediated by human subjectivity and as close to reality as possible. However, we challenge this assumption by demonstrating that bias and conflict remain structurally embedded within the data that LLMs process, reinterpret, and generate. Rather than eliminating ambiguity, LLMs conceal it through a process of complexity reduction and an illusion of truth. Through a transdisciplinary analysis of LLM responses to culturally sensitive prompts, we reveal how ambiguity and conflict are systematically smoothed over in human-machine interactions. By examining empirical cases involving fine-tuning, dataset selection, and trigger-based interactions, we argue that LLMs are deliberately designed to produce responses that align with an idealized notion of "universal humanity": a neutral, conflict-free, and harmonious representation of knowledge. This shaping of interactions reinforces a curated, utopian version of reality, influencing how users perceive and engage with AI-generated information.

| File | Size | Format |
|---|---|---|
| paper LLM.pdf (open access; License: Public domain) | 844.59 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
