How Does Technology Address Our Emotional Needs in Modern Society?

How should we interpret the rise of conversational systems that simulate the dead? How can we understand this phenomenon, assess its potential risks, and determine its legal boundaries?
During my doctoral studies, I examined the emergence and adoption of post-mortem avatars designed to replicate deceased relatives. In the process, I introduced the term ghostbots to describe these systems, a term that, I believe, sits comfortably alongside related terms in the wider literature. My thesis integrates legal analysis with the technical workings of conversational AI, which are central to how these systems operate, and with a broader engagement with Science & Technology Studies (STS), thereby situating ghostbots within the fascinating and challenging interplay of technology and society.
My doctoral research makes three key contributions. First, it explains the emergence of ghostbots from a sociocultural perspective. Second, it identifies and categorizes the distinct risks associated with the adoption of these technologies. Third, it offers a critical examination of current legal frameworks. I argue that, although existing data protection and AI regulatory models may offer limited oversight, they fall short of effectively addressing the substantial challenges ghostbots pose. In response, I propose an expanded legal assessment and the development of alternative mitigation strategies. Ultimately, my thesis acknowledges that even the most ambitious legal paradigms may prove too porous to govern the simpler, yet equally intriguing, forms of AI adoption and engagement.
Throughout this scholarly journey, I have been profoundly fortunate to receive the thoughtful guidance of Benjamin Farrand and Edina Harbinja, mentors whose insights continually pushed and tested my ideas. Ben’s unique critical framing of the interplay between law, politics, and the regulation of emerging technologies, paired with Edina’s pioneering, world-leading research in post-mortem privacy, has been instrumental in shaping my work. My time at The Alan Turing Institute in London was equally transformative: the academic year of my research placement opened interdisciplinary dialogues that broadened my scholarly profile.
Reflecting on my doctoral experience, I can say I never intended my work to evolve into a narrowly defined research path. This is evident in the way I now discuss the topic: as a case study rather than a lifelong vocation. I do not see myself doing ‘ghostbot research’ in the future, and I view this not as a limitation but as a natural evolution of academic interests. I have always argued that a PhD is not about becoming an expert on a static topic. After all, the intellectual landscape is ever-changing, and claiming absolute mastery is, frankly, pretentious. Instead, the doctoral experience is about acquiring new skills and refining existing ones in order to interrogate social realities and conduct meaningful research; in my case, the legal analysis of digital technologies. For me, the PhD represented this transformative phase: a period of growth that enabled my transition from a public policy officer and early judiciary practitioner into a full-time academic.
I remain deeply grateful to Lilian Edwards (my first supervisor until her retirement), as well as to Ben, Edina, Newcastle Law School, and the Alan Turing Institute, all of whom played vital roles in this transformative process.
