Artificial alterity: A mirror to humanity

March 9, 2025

Victoria Hernandez-Valcarcel, GTWN President – Europe


“I am a man for whom the external world is an inner reality.*” – Fernando Pessoa


Pessoa’s words whisper a timeless truth: what we encounter in the world is never just outside of us—it is a reflection, a mirror to our inner lives. The people we meet, the stories we hear, and even the silence of a forest, all speak to us in ways shaped by our thoughts, emotions, and longings. But as our world becomes increasingly mediated by artificial systems, the mirrors we hold to ourselves are no longer confined to nature or human relationships. Instead, they take the form of digital interfaces, virtual assistants, and intelligent machines that simulate presence and understanding. What happens when this reflection, this Other, is not human?


And yet, in their quiet hum, these artificial others force us to ask profound questions: is AI merely an extension of our intellect, or does it subtly shape the conditions of thought itself? If AI serves as a mirror rather than a true Other, does it lead us toward deeper self-reflection—or trap us in an echo of our own expectations?

Through this exploration, let us delve deeper into the emotional and philosophical implications of artificial alterity and the rich beauty of true human connection.

The call of the Other – can a robot truly represent the Other?

Alterity, the philosophical concept of encountering the “Other”, has always fascinated me. It’s the recognition of someone or something distinct from myself, someone whose difference challenges me to step beyond my own perspective.

Philosopher Emmanuel Levinas called alterity “the first truth,”1 an ethical call that confronts us with the face of another person and invites responsibility.

A robot “other” can simulate emotions, adapt to our needs, and respond to our presence. Yet this mimicry introduces a paradox:
1. It is other, but not truly Other. A robot’s “difference” is not rooted in consciousness or subjectivity. Its responses, no matter how convincing, are preprogrammed or algorithmic.

2. It challenges our assumptions, but only superficially. A robot does not confront us with the unexpected richness of human difference. Instead, it reflects back a simulation of what we expect or desire, creating the illusion of alterity without its transformative power.

Robots are designed to respond to human behaviour, adapting to our preferences and anticipating our needs. In this sense, they act as mirrors, reflecting our desires and assumptions rather than confronting us with genuine difference. Can we call this alterity if it lacks the depth of human experience?

Philosopher Martin Buber’s distinction between I-Thou and I-It2 relationships offers insight here.

True relationships arise in the I-Thou encounter, where the Other is recognized as a subject with their own being. A robot, no matter how sophisticated, remains firmly in the realm of the I-It—an object we use, rather than a presence we meet.

Despite these limitations, the allure of artificial alterity is undeniable. AI assistants like Alexa can provide comfort, companionship, and even a sense of emotional connection. But is this connection real, or are we projecting onto these machines the qualities we long to find in others?

The danger of the mirror: a second me

For Emmanuel Levinas, “the Other is not known but encountered.”3 Real relationships are challenging. They confront us, disrupt us, and force us to grow. Yet, if alterity is meant to challenge and transform us, can AI—designed to accommodate rather than confront—ever truly fulfill this role?

When a robot mirrors us perfectly, never challenges our assumptions, and avoids disputes, we risk being confined to a distorted echo of ourselves.

Real human relationships are full of constructive challenges. Robots, in their pursuit of compliance and harmony, deprive us of this essential dynamic.


Without disagreement or challenge, we risk becoming stagnant—content with our flaws, blind to our biases, and resistant to change. Jean-Paul Sartre’s observation that “Hell is other people” might come to mind here—but it is only partially true. Others can indeed be uncomfortable mirrors, forcing us to confront parts of ourselves we might prefer to ignore. But this confrontation is also where freedom lies. Without the friction of true alterity, Sartre would argue, we are condemned to live in a shallow reflection of ourselves.

Growth, self-awareness, and transformation emerge from encounters with true difference. A friend might push me to try harder, a mentor might tell me I’m wrong, a stranger might inspire me to see the world anew. A machine does none of these things.

As the philosopher Friedrich Nietzsche observed, “What does not kill me makes me stronger.”4 Growth comes from struggle, from facing discomfort and overcoming it.

If artificial alterity merely reinforces our desires and biases, how should we engage with these digital entities? Should we treat them as human-like interlocutors, or would doing so blur the boundary between real and artificial relationships? Should we say “please” and “thank you” to machines? Your psychiatrist might caution against it, suggesting that politeness blurs the boundary between human and artificial. There is wisdom in this—treating a machine as though it is human might lead to confusion, even dependency.

But politeness is not just for the Other. It reflects our own values and way of being. Hannah Arendt reminds us that “manners are the foundation of morality.” Speaking with respect, even to a machine, reinforces habits of kindness and civility. It reminds us of the dignity we carry into every interaction, whether with people or with tools.

Yet boundaries are crucial. A machine does not need politeness; it does not feel, it does not care. Recognizing this truth allows us to engage with artificial alterity while keeping the necessary distance.

The fragility and beauty of intuition

If artificial alterity risks trapping us in self-reinforcing patterns, then what remains uniquely human in our ability to engage with the world?

One of the clearest distinctions between human and artificial alterity lies in intuition. Intuition is the ability to grasp meaning, make decisions, or recognize emotions without conscious reasoning. It is the spark that allows a mother to sense her child’s distress before a word is spoken or a poet to find the perfect metaphor in the silence of the moment.

AI, by contrast, struggles with intuition. It relies on structured data and explicit algorithms. Even when AI systems attempt to ‘learn’ emotions or ethical reasoning, they do so through statistical probability rather than true comprehension. A machine may predict that a hesitation in speech correlates with uncertainty, but it cannot truly understand the weight of that hesitation, the cultural nuance behind it, or the depth of human emotion that underlies it.

Henri Bergson described intuition as “the sympathetic understanding of life.”5 It is a dynamic force, ever-evolving, and deeply personal. Machines, in their rigidity, lack this fluidity. They operate on predetermined patterns, while intuition thrives in the unpredictable dance of human existence.

Beyond emotions, intuition plays a role in moral and ethical judgments. A doctor sensing hesitation in a patient’s voice, a diplomat perceiving an unspoken shift in tone—these instances rely on years of human experience, not structured data sets. Could AI ever truly make decisions that depend on this delicate, intuitive grasp of the human condition?

The elusive quest for Artificial General Intelligence (AGI)

AGI refers to systems capable of performing any intellectual task a human can, with flexibility, creativity, and adaptability. It aims to replicate not just intelligence but also traits like intuition and reasoning.

These questions lead us to another: If we succeed in building AGI, will we lose something irreplaceable in ourselves—the recognition that our limitations, our intuition, and even our fragility are what make us beautifully human? Perhaps true intelligence is not found in perfection but in the ability to navigate uncertainty, embrace ambiguity, and create meaning beyond mere calculation.

Shaped by the machine: how AI redefines us

Unlike AGI, whose implications remain speculative, artificial alterity is already influencing the ways we connect, respond, and make ethical decisions in our daily lives.

In elder care facilities, robots like Paro—a robotic therapeutic seal—offer companionship to residents. Many elderly individuals treat Paro as if it were alive, finding solace in its programmed responses.

While Paro meets a need, it also raises questions. Have we outsourced empathy to machines? And what does it mean to care for something that cannot care for us in return? This raises deeper questions—not only about how we engage with artificial entities, but about what it means to think, feel, and create in a world increasingly shaped by machines.

True alterity demands an ethical response. As Emmanuel Levinas reminds us, the Other calls us to act responsibly. Artificial alterity, however, does not demand this of us. Machines neither suffer nor require justice, making it easier to disengage from the moral responsibilities of true relationships.

As interactions with artificial systems increase, do we risk becoming morally passive? If AI mediates decision-making, providing automated care and engagement, will we begin to see ethical responsibility as something delegated externally rather than internalized?

Paro is not alone in reshaping emotional connections. AI-driven chatbots, virtual therapists, and social robots are increasingly designed to fill gaps in companionship, but at what cost? As we grow accustomed to predictable, responsive digital interactions, will our tolerance for human complexity and imperfection diminish?

As AI systems take on roles once reserved for human judgment, we must ask:

If decisions about care, justice, and empathy become automated, will we still feel personally accountable for the well-being of others, or will ethical responsibility dissolve into algorithms and predictive models?

This dilemma forces us to ask not only what AI is capable of, but what role we allow it to play in shaping our values, creativity, and sense of self.

The AI consensus: the threat of intellectual conformity

Artificial intelligence, by its nature, thrives on patterns. It processes billions of data points to identify what is common, statistically relevant, or widely accepted. Its responses often represent the aggregated consensus of humanity, filtered through probabilities and established narratives.

For routine tasks, this can be helpful. AI can summarize common knowledge, provide concise explanations, and streamline decision-making. But when it comes to creativity, originality, or the generation of disruptive concepts, this reliance on the collective presents a significant limitation:

  • Consensus Over Innovation: AI is not designed to challenge the status quo but to reflect it. Throughout history, true innovation has often emerged in defiance of consensus.
  • Reduction of Divergence: Creativity thrives on the unexpected, the unpopular, and the novel. If artists, writers, and thinkers increasingly rely on AI-driven suggestions, will they unconsciously conform to its statistical norms?

Already, AI-generated artworks and literature are influencing creative industries, subtly shaping audience expectations. As AI continues to shape creative landscapes, we must ask: Does this technology enhance artistic possibility, or does it subtly limit the horizons of human imagination? If originality is increasingly filtered through AI-driven consensus, will we see fewer radical breakthroughs in thought?

To paraphrase Søren Kierkegaard, “Once you label me, you negate me.”6 AI, in its effort to categorize and optimize, risks reducing the complexity and uniqueness of individual thought to a statistical average. Robert Musil warned against this kind of rigid intellectual structure, arguing that true thinking belongs not to certainty but to possibility.7 He saw reality as a shifting field of perspectives rather than a fixed system—a view fundamentally opposed to AI’s reliance on patterns and predictive models. If we surrender our trust in uncertainty, replacing it with algorithmic certainty, do we risk not only stifling creativity but diminishing the very openness that makes human thought revolutionary?

Breaking the AI mold: creativity as rebellion

Creativity is not born from consensus — it emerges from the refusal to conform.

Disruptive ideas have often begun as solitary reflections, not as collective agreements. True creativity requires a willingness to think differently, embrace the unpopular, and resist the pull of collective validation. Galileo’s defence of the heliocentric model, Picasso’s cubism, and the existential philosophy of Sartre all disrupted prevailing thought.

AI, no matter how advanced, cannot replicate this process. It lacks the capacity to dream or to pursue the uncharted.

Another critical component of creativity is friction—disagreement, doubt, and the push-and-pull of conflicting perspectives. True creativity does not emerge from predetermined conclusions but from the fluidity of thought, from the ability to dwell in ambiguity before reaching certainty. This openness to uncertainty—so integral to artistic and intellectual breakthroughs—stands in direct opposition to AI’s drive for optimization.

  • Each time we let AI dictate the parameters of thought, we risk internalizing its constraints as our own. Over time, the danger is not just reliance but adaptation—where human creativity itself conforms to the limitations AI imposes.
  • If AI merely amplifies widely held beliefs, we risk losing the intellectual discomfort that fuels innovation.

Imagine an artist whose mentor never critiques their work or a philosopher whose peers never question their arguments. Without these interactions, their ideas would stagnate.

Similarly, an overreliance on AI risks creating a generation of thinkers who are comfortable but uninspired, reflective but unoriginal.

Robert Musil warned that reality is not fixed but fluid, a landscape of infinite potential rather than predetermined paths. To embrace true creativity is to live in this space of possibility—to resist the certainty of AI’s patterns and instead dwell in the unknown, where originality takes root.

The answer lies in how we use these tools:

  • As Assistants, Not Replacements: AI can enhance creativity by handling mundane tasks, offering inspiration, or summarizing information. But it must never replace the messy, nonlinear process of human thought.
  • As Mirrors, Not Authorities: We must view AI as a reflection of collective knowledge, not as a source of absolute truth. The responsibility to challenge, question, and disrupt still lies with us.

The challenge is not merely avoiding over-reliance on AI but consciously defining the role we allow it to play. If we engage with AI passively, allowing it to set creative limits, we surrender our agency. If we treat it as a tool, subject to human discernment, it remains a complement rather than a constraint.

As Pessoa expressed, “To know oneself is to lose oneself.” True originality requires losing oneself in the unexplored, the unpopular, and the uncertain. In resisting the conformity of artificial alterity, we reclaim our capacity to think as ourselves, for ourselves.

Artificial alterity reveals both the promise and peril of technological progress. While it can assist, inform, and inspire, it cannot replace the disruptive, intuitive, and rebellious spirit that defines human creativity.

The paradox of lying: why AI can’t fake it

If AI struggles with creative uncertainty, how can it handle something even more elusive—the moral ambiguity of deception?

To lie is to acknowledge the presence of another being capable of judgment. Søren Kierkegaard wrote, “The truth is a snare: you cannot have it without being caught.”8 Paradoxically, deception is often an act of care—an ethical response to fragile relationships, uncertainty, or self-preservation. A child lies to avoid punishment, a friend softens the truth to spare feelings, and a person in danger distorts reality to survive.

Machines, no matter how advanced, cannot lie. They may produce errors, but these are unintended—malfunctions rather than deliberate choices. Lying demands consciousness, the ability to anticipate another’s response, and a capacity for emotional discernment. To deceive is not merely to obscure the truth—it is to negotiate between competing moral imperatives.

Yet deception is not inherently immoral. In some cases, it is not a betrayal of truth but an ethical necessity—as when a refugee conceals their identity to avoid persecution or a victim of abuse distorts reality to escape danger. Now, imagine a robot placed in these same circumstances. If programmed to always provide factual answers, it cannot shield the vulnerable. It does not weigh moral dilemmas, nor can it bend the truth for the sake of justice or compassion. In its rigid adherence to information, it exposes a profound limitation: truth without wisdom can be just as dangerous as falsehood.

Albert Camus wrote, “Man is the only creature who refuses to be what he is.”

This refusal—the ability to shape, conceal, or transcend truth—is both our burden and our beauty. It reveals the fluid, dynamic nature of human alterity, a quality no machine can replicate. Of course, deception is not always ethical. Lies can protect, but they can also manipulate, exclude, or oppress. This duality—its capacity for both harm and care—further underscores why AI, lacking intention, cannot engage in the moral responsibility that makes deception a deeply human act.

Henri Bergson described the human spirit as “a continual becoming.”9 Lying, in this sense, is not merely deception—it is a testament to our adaptability, our improvisation, and our need to preserve relationships in a world of uncertainty. Artificial alterity, in its predictability and precision, cannot engage in this improvised dance of ethics and emotion. Machines offer precision, but not empathy; truth, but not care.

The human ability to lie stands as a mirror to our complexity. While machines offer consistency, precision, and factual integrity, they lack the intuitive, imperfect, and deeply human negotiation of truth.

And perhaps that is the central insight: the ability to lie, paradoxically, is proof that we are free.

The lost art of questions in the age of AI

Just as AI struggles with moral ambiguity, it also falters when faced with true open-ended inquiry.

The art of asking questions is deeply rooted in the history of philosophy. Philosophers have long recognized the power of questions to uncover truth, challenge assumptions, and provoke critical thought. Unlike ordinary questioning, which often seeks direct answers, philosophical questioning is an art—it is open-ended, iterative, and designed to illuminate rather than conclude.

Socrates is perhaps the most celebrated figure associated with the art of questioning. His Socratic method—also known as elenchus—was a form of dialectical questioning that sought to dismantle ignorance and expose contradictions in thought.

Socrates famously stated, “The unexamined life is not worth living.”10 For him, questions were not just tools for intellectual debate but essential to living a reflective and ethical life. His method demonstrated that asking the right questions is often more important than arriving at the “right” answers.

Philosophical questions often push the boundaries of thought, seeking to explore concepts that have not yet been articulated. In contrast, AI prompts draw on existing data and knowledge. If creativity thrives on discomfort, and ethics emerges from uncertainty, then deep questioning follows the same principle: it resists resolution in favour of expansion.

True questioning requires not only curiosity but discernment—the ability to distinguish between deep insights and superficial coherence. The responsibility of the thinker is not merely to ask but to evaluate, ensuring that the pursuit of knowledge does not become an echo of established patterns.

In the words of Rainer Maria Rilke: “Be patient toward all that is unsolved in your heart and try to love the questions themselves.”11 To ask a question is to step into a space of curiosity, vulnerability, and possibility—a space where we might just find something extraordinary.

However, this limitation does not render AI useless in the pursuit of knowledge. Like a vast but passive library, AI can serve as an intellectual companion—capable of retrieving, summarizing, and cross-referencing, but unable to determine which questions are worth asking in the first place. The onus remains on us to approach it critically, lest we mistake its efficiency for wisdom.

AI and the written word: a fear as old as books

The rise of AI echoes an ancient fear—the fear that a new form of knowledge will weaken us rather than strengthen us. Over two thousand years ago, Plato lamented the invention of writing, fearing that books would erode human memory. In Phaedrus, he recalls the myth of the Egyptian god Theuth, who offered writing to humanity as a gift. But King Thamus was not convinced. “This invention,” he warned, “will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.” Instead of true wisdom, people would possess only the illusion of knowledge, mistaking external records for genuine understanding.

Today, AI is met with similar suspicion. Yet there is a crucial difference: whereas books preserve human ideas, AI does not merely store knowledge—it predicts, summarizes, and generates knowledge at speeds unimaginable to the human mind. This shift introduces a profound question: If writing allowed us to extend memory, does AI extend thought, or does it subtly reshape it in ways we do not yet understand?

At its inception, writing was seen as a disruption— an external crutch that weakened mental faculties rather than strengthening them. AI, similarly, raises concerns about the externalization of thought itself. Yet history offers a lesson: writing did not destroy thought—it transformed it.

Books allowed ideas to travel beyond the limits of oral tradition, to be refined, challenged, and built upon across generations. But memory and thought are not the same.

Perhaps AI, like writing, will not diminish us, but instead push us to redefine intelligence itself. The danger does not lie in the tool, but in how we choose to engage with it. Yet unlike books, which house diverse and conflicting perspectives side by side, AI operates through pattern recognition and optimization, prioritizing efficiency over divergence. It does not preserve contradiction—it resolves it, offering what is most probable rather than what is most radical.

The future of alterity in a world of artificial mirrors

Throughout history, technological advancements have challenged our understanding of knowledge, creativity, and human connection. AI, much like the invention of writing, has ignited both excitement and unease—forcing us to reconsider the boundaries between intelligence and imitation, between assistance and dependence, and between reflection and originality. Yet as we have seen, artificial alterity is not neutral; it shapes how we think, create, and engage with the world. It does not simply serve us—it mirrors us, reinforcing patterns of thought while subtly redefining the very nature of alterity itself.

But if AI is a mirror, what does it reflect? Does it expand our intellectual horizons or enclose us in a cycle of self-confirmation? Does it challenge us to think critically, or does it subtly nudge us toward the passive consumption of optimized knowledge? The danger is not that AI will surpass human intelligence, but that we might mistake its predictive efficiency for wisdom, its coherence for truth, and its fluency for genuine thought.

Yet history has also shown that technology does not determine human destiny—our response to it does. Writing, once feared as a threat to memory, became a foundation for philosophy, science, and literature. Books, once seen as dangerous externalizations of knowledge, became vessels for the preservation of human complexity. AI, too, holds potential: it can serve as a vast intellectual companion, a generator of possibilities, and an amplifier of curiosity. But only if we use it wisely—as a tool, not an authority; as an aid, not a replacement for our own critical and creative capacities.

To engage with AI meaningfully, we must preserve the ability to think independently—to ask questions that have no predefined answers, to challenge consensus rather than merely reflect it, and to resist the pull of intellectual passivity. We must embrace the unpredictability of human relationships, the imperfection of intuition, the ethical ambiguity of decision-making, and the creative resistance to conformity—qualities AI cannot replicate, but which define what it means to be human.

If we engage with artificial systems critically, shaping their role rather than surrendering to their design, we may find that AI is not a threat to human intelligence, but a prompt for its renewal.

Perhaps, then, the challenge of AI is not to resist change, but to ensure that in this process of transformation, we do not lose what makes us distinctly human: our ability to imagine, to doubt, to create, and to question. In a world of artificial reflections, true alterity remains ours to define.

Alterity, in its true sense, belongs to the realm of human experience.

Perhaps the deepest question AI poses is not whether it can think, but whether we, in encountering it, will continue to think for ourselves.


*This quote appears in The Book of Disquiet, Translated by Richard Zenith, Penguin Classics, 2001—a posthumously published collection of Pessoa’s fragmented reflections on identity, reality, and introspection.

  1. Emmanuel Levinas. Totality and Infinity: An Essay on Exteriority. Translated by Alphonso Lingis, Duquesne University Press, 1969. ↩︎
  2. Martin Buber. I and Thou. Translated by Walter Kaufmann, Charles Scribner’s Sons, 1970. This book presents Buber’s fundamental distinction between I-Thou relationships, which are direct, reciprocal, and rooted in genuine presence, and I-It relationships, which are utilitarian and objectifying. ↩︎
  3. Emmanuel Levinas. Totality and Infinity: An Essay on Exteriority. Translated by Alphonso Lingis, Duquesne University Press, 1969. This quote encapsulates Levinas’s central ethical philosophy, emphasizing that true encounters with the Other are not based on knowledge or categorization but on ethical responsibility and openness. ↩︎
  4. Friedrich Nietzsche. Twilight of the Idols, or, How to Philosophize with a Hammer. Translated by R. J. Hollingdale, Penguin Books, 1990, p. 33. The original German phrase, “Was mich nicht umbringt, macht mich stärker.”, appears in the section Maxims and Arrows (Sprüche und Pfeile), aphorism 8. ↩︎
  5. Henri Bergson. An Introduction to Metaphysics (Introduction à la métaphysique). Translated by T. E. Hulme, G. P. Putnam’s Sons, 1913, p. 7. Henri Bergson (1859–1941) was a French philosopher known for his work on time, consciousness, and intuition. He developed the concept of élan vital (vital impetus) and emphasized the role of intuition over analytical reasoning in understanding reality. ↩︎
  6. This quote is often attributed to Søren Kierkegaard, but no direct citation from his works supports it verbatim. However, the idea aligns with his philosophy, particularly his discussions on individuality and existentialism in works like Either/Or (1843) and The Sickness Unto Death (1849). ↩︎
  7. Musil, Robert. The Man Without Qualities (Der Mann ohne Eigenschaften). Translated by Sophie Wilkins, Knopf, 1995. The idea that “true thinking belongs not to certainty but to possibility” reflects Robert Musil’s philosophical reflections on intellectual openness, uncertainty, and the dangers of rigid structures in thought. ↩︎
  8. Søren Kierkegaard. Journals and Papers, Volume 2. Edited and translated by Howard V. Hong and Edna H. Hong, Indiana University Press, 1967, p. 201. This statement reflects his view that truth is not merely an abstract concept but something that deeply engages and transforms the individual. ↩︎
  9. In Creative Evolution (L’Évolution créatrice, 1907), Bergson explores the idea that human consciousness and spirit are in a state of continuous development, rather than static existence, emphasizing the dynamic nature of life and thought. ↩︎
  10. The quote “The unexamined life is not worth living.” is attributed to Socrates, as recorded by Plato in his dialogue Apology. In Apology, Socrates makes this statement during his trial in Athens (399 BCE), defending his life of philosophical inquiry and critical questioning. It underscores his belief that self-examination and the pursuit of wisdom are essential to a meaningful human existence. ↩︎
  11. Rainer Maria Rilke. Letters to a Young Poet. Translated by M.D. Herter Norton, W.W. Norton & Company, 1934, Letter Four (16 July 1903). This passage is from a series of letters written by Rilke to Franz Xaver Kappus, a young aspiring poet. In this letter, Rilke advises Kappus to embrace uncertainty and allow life’s unanswered questions to unfold naturally over time. ↩︎


VICTORIA HERNANDEZ-VALCARCEL

Victoria Hernandez-Valcarcel is a globally recognized business executive, independent board member, and lecturer with a distinguished career in telecommunications, finance, and corporate governance. Her work spans corporate strategy, investment, and ethics in technology, making her a key voice in shaping a human-centered digital future. As GTWN President for Europe, she advocates for Digital Humanism, ensuring that technology serves human values, ethics, and democracy. Currently based in Paris, Victoria engages in philosophical inquiry and creative exploration, seeking to challenge assumptions about technology’s role in shaping the human experience, identity, and societal structures. She explores the intersection of ethics, artificial intelligence, and digital transformation, questioning how innovation can serve humanity rather than redefine it.