NextFin news — On November 8, 2025, researchers Jenny Kidd of Cardiff University and Eva Nieto McAvoy of King’s College London published a detailed study in the journal Memory, Mind & Media exploring AI applications designed to simulate conversations with deceased individuals. These AI systems, often termed “deathbots,” utilize a person’s digital footprint—voice recordings, text messages, emails, and social media activity—to create interactive chatbots or voice avatars that mimic the personality and speech style of the deceased. The researchers conducted the study by becoming test subjects themselves: they uploaded their own digital data to generate “digital doubles” and engaged with the resulting AI recreations to assess the authenticity and emotional resonance of these interactions.
The rise of this digital afterlife technology stems from a growing industry that promises to preserve memory in an increasingly interactive and perpetual manner. Situated mainly in Western digital markets, these platforms combine archival memory storage with generative AI capabilities, enabling users to converse with representations of lost loved ones in real time. The platforms incorporate machine learning algorithms that iteratively refine simulated personalities, aiming to provide an “authentic” emotional connection through ongoing conversational adaptability.
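At a high level, such platforms condition a generative model on a person’s archived writing. The sketch below is purely illustrative and assumes nothing about any vendor’s actual system: the `DigitalFootprint` structure, field names, and `build_persona_prompt` helper are hypothetical, showing only how archived messages might be condensed into a persona prompt for a language model.

```python
# Hypothetical sketch: turning a digital footprint into a persona prompt.
# All names and fields are illustrative assumptions, not any real platform's API.
from dataclasses import dataclass, field

@dataclass
class DigitalFootprint:
    name: str
    messages: list = field(default_factory=list)      # archived texts / emails
    catchphrases: list = field(default_factory=list)  # recurring expressions

def build_persona_prompt(fp: DigitalFootprint, max_examples: int = 3) -> str:
    """Condense archived writing into a system prompt for a generative model."""
    examples = "\n".join(f"- {m}" for m in fp.messages[:max_examples])
    phrases = ", ".join(fp.catchphrases) or "none recorded"
    return (
        f"You are simulating {fp.name}. Mimic their tone and vocabulary.\n"
        f"Characteristic phrases: {phrases}\n"
        f"Sample messages:\n{examples}"
    )

fp = DigitalFootprint(
    name="Alex",
    messages=["See you at the allotment on Sunday!", "Tea first, decisions later."],
    catchphrases=["tea first"],
)
prompt = build_persona_prompt(fp)
print(prompt)
```

In a real deployment, a prompt like this would be sent to a generative model alongside each user message; the “iterative refinement” the article describes would then adjust the prompt or fine-tune the model based on user feedback.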
The motivation behind this technology is deeply human: addressing grief, preserving legacy, and extending relational bonds beyond death. However, the research highlights significant limitations. Despite the sophistication of algorithmic replication, AI chatbots often produce responses that feel scripted, emotionally discordant, or mechanistic—especially when discussing sensitive subjects like mortality. For example, cheerful emojis or upbeat phrasing used inappropriately during somber exchanges underscore the AI’s incapacity to fully grasp the emotional complexity of loss.
Furthermore, the study identifies a strong commercial framework underpinning these systems. Far from charitable memorial projects, these platforms operate as tech startups monetizing remembrance through subscription models, freemium tiers, and partnerships with insurers and care services. Emotional and biometric data harvested through user engagement fuel continuous interaction, placing memory itself within a political economy where deceased-related data perpetuates financial value beyond a person’s life. This dynamic situates these AI tools within the broader “emotional AI” market, where affective experiences are designed, measured, and commercialized.
From an analytical perspective, this intersection of technology, memory, and commerce raises critical ethical, psychological, and sociocultural considerations. The technology’s genesis can be traced to advances in natural language processing, voice synthesis, and machine learning, which have rapidly improved the fidelity of AI simulation. However, the human experience of mourning and memory is inherently relational, contextual, and dynamic—qualities resistant to algorithmic capture. The flattening of complex emotional narratives into scripted digital interactions risks trivializing grief and may disrupt traditional mourning processes.
Moreover, the normalization of perpetual digital presence through AI-generated “synthetic afterlives” challenges longstanding notions of death finality. As media theorists highlight, conflating storage with memory may obscure the vital role of forgetting in healthy remembrance, potentially leading to a digital liminality where the deceased exist in an endlessly interactive, artificially updated state. This shift could transform social understandings of death and legacy, with profound long-term cultural effects.
Looking forward, the digital afterlife industry is poised for significant growth, driven by expanding consumer demand for personalized memorialization and the ubiquity of digital traces. Despite current shortcomings, continuous improvements in AI emotional intelligence and context awareness may enhance the authenticity of simulated interactions. However, this evolution will likely intensify regulatory debates surrounding data privacy, consent from the deceased, and the psychological impacts on users engaging with digital avatars of lost loved ones.
The integration of AI deathbots within healthcare and counseling services could emerge, offering novel grief support tools that augment or complement traditional therapy. Yet, this potential is contingent on rigorous ethical guidelines to prevent exploitation and ensure respectful handling of sensitive digital afterlives. Additionally, interdisciplinary collaboration between technologists, psychologists, ethicists, and legal experts will be essential to navigate the complex terrain where technology, commerce, and human emotion intersect.
In conclusion, the research reveals that while AI can simulate conversations with the deceased, these experiences illuminate the intrinsic limits of technology in replicating the living complexity of memory and relationships. The commercial exploitation embedded in these platforms foregrounds a new economic and cultural paradigm where death and memory become serviceable commodities. Stakeholders must critically assess these trends to balance innovation with empathy, respect, and ethical responsibility as AI continues to reshape how societies remember and relate to the past.
In an accompanying article in The Conversation, Kidd and Nieto McAvoy argue that these “deathbots” prompt reflection on the evolving, data-driven nature of memory and on the societal implications of creating synthetic afterlives that blend mourning with monetization.
