
AI and memory

Published online by Cambridge University Press:  11 September 2024

Andrew Hoskins*
Affiliation:
University of Glasgow, Glasgow, UK

Abstract

This paper is written at a tipping point in the development of generative AI and related technologies and services, which heralds a new battleground between humans and computers in the shaping of reality. Large language models (LLMs) scrape vast amounts of data from the so-called ‘publicly available’ internet, enabling new ways for the past to be represented and reimagined at scale, for individuals and societies. Moreover, generative AI changes what memory is and what memory does, pushing it beyond the realm of individual human influence and control, yet at the same time offering new modes of expression, conversation, creativity, and ways of overcoming forgetting. I argue here for a ‘third way of memory’, to recognise how the entanglements between humans and machines both enable and endanger human agency in the making and the remixing of individual and collective memory. This includes the growth of AI agents, with increasing autonomy and infinite potential to make, remake, and repurpose individual and collective pasts, beyond human consent and control. This paper outlines two key developments of generative AI-driven services: firstly, they untether the human past from the present, producing a past that was never actually remembered in the first place, and, secondly, they usher in a new ‘conversational’ past through the dialogical construction of memory in the present. Ultimately, developments in generative AI are making it more difficult for us to recognise the human influence on, and pathways from, the past, and are increasingly challenging human agency over remembering and forgetting.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

The new reality of memory

The advent of OpenAI's ChatGPT chatbot in 2022, and the recent rapid development and accessibility of AIFootnote 1 and related technologies and services, heralds a new battleground between humans and computers in the shaping of reality. This article asks, at this moment, what does AI's shaping of a new reality mean for what memory is and what memory does?

The 2020s are marked by a convergence of huge computing power with the greatest memory dump in history. The digital participation of billions – producing, exposing, and sharing data and information about personal and public selves and experiences – has forged an astonishing shadow archive of us (Hoskins 2009a; Lagerkvist 2014). This shadow archive has been waiting for something to render it accessible, meaningful, and usable, on a planetary scale. That time has come, and that something is generative artificial intelligence.

My central concern here is with the advent of generative AI. This is a step change in the creation of ‘new’ high-quality text, images, and other content such as voice recordings, based on the data the underlying models were trained on, via easily usable interfaces between humans and machines. Examples of generative AI include OpenAI's ChatGPT,Footnote 2 Google's Gemini (formerly Bard),Footnote 3 and Meta's Llama 3.Footnote 4 OpenAI's DALL-E 2Footnote 5 and Stability AI's Stable DiffusionFootnote 6 are specifically developed for generating images and art from text prompts.

The sudden pervasiveness of generative AI services in the form of chatbots is utterly transforming the current and future relationship between humans, technologies, and the past, forging a new AI memory ecology.Footnote 7

Chatbots are computer programmes that respond to users' prompts with human-like replies, as though the individual is engaged in conversation. It is this process of prompting the creation of something in the present, in relation to things written, spoken, experienced, and recorded in the past, that sounds like a useful and well-established definition of how human memory works. Many influential approaches in Memory Studies treat memory not as a fixed or static entity, but rather as an active process, whereby the past is reconstructed in the present. What is remembered is not some more-or-less accurate trace of the past, but rather a remaking, reimagining, or revisioning of past events that is significantly shaped by the context of recall (Bartlett 1932; Middleton and Brown 2005; Schacter 1996; Wertsch 2002).

Yet today, it is increasingly AI that generates the context in which memory is produced, and even the memory itself. Virtual assistants, memory apps, and chatbots build on all the fragments of the past that have fed and trained large language models (LLMs) to offer a humanly intelligible response to questions or instructions in a new moment. Further exchanges in turn train or guide AI systems to offer answers more attuned to the prompts they are fed.

Generative AI, and related technologies and services, both enable and endanger human agency in the making and the remixing of individual and collective memory. To help understand these transformations, I draw on and connect approaches from the cognitive and the social sciences to explore two key interrelated features of this remaking and erasure of the past that inform my overarching claim here as to a third way of memoryFootnote 8:

(1) AI untethers the human past from the present. It produces a past that was never encoded into memory (never experienced) in the first place. We are now entangled in and confronted by a past that never existed (retrospective) and a future that never will exist (prospective).

Through generating a past that never existed, AI breaks the relationship between the encoding of the past into memory, its storage, and its later retrievability. By ‘encoding’ I mean the ways in which humans perceive, get, and learn information, so that they can store and then later retrieve it as memory (McDermott and Roediger 2018). If information about an event or experience (an episode) is never encoded in the first place, then it will not be retrievable as memory.

The stuff that AI generates by discerning patterns in the huge amounts of data fed to LLMs was never perceived or encoded as something intended to be retrievable as memory in the future. AI-prompted memories are generated rather than retrieved. The result is the creation of a new kind of past that never really existed before. As Bowker puts it, ‘It is the pleats and the folds of our data rather than their number that constitute their texture’ (2007, 24). We can no longer believe our own eyes when confronted by that which seems strangely familiar yet unreal: a kind of uncanny memory.

(2) The ideal of a dialogical construction of memory is appropriated by AI's promise of enabling an eternal conversation with the past you, and the past others.

The smartphone, as both connective and computational, as both portal and archive, continuously scaffolds our lives and memories (Barnier 2010). A new generation of AI scaffolds memory in more immediate and personal ways through conversation. Whereas many digital memory apps and services focus on the taking, collecting, and repurposing of images and videos, the AI past feels eminently sociable as we can chat with it.

Generative AI, through a range of apps and services, enables the living to ‘speak’ with the dead, including through the creation of a chatbot of you. This enables others to have ‘conversations’ with you, reimagining and remaking your memory beyond the grave.

(3) Through newly creative modes of expression and formation of the past, AI creates a third way of memory, mixing the machinic and the human in new ways. AI overcomes unwanted forgetting, giving memory new hope, yet through its production of a past that never existed, it makes forgetting impossible; the AI agentic past is one without parameters in the machine's new capacity for forging and remaking long-term memory.

Moreover, the third way of memory is to recognise the potential of AI to consort with, challenge, and also replace the agency of human remembering and forgetting. By human agency over memory, I mean an active, willed, functional, deliberating memory, seen as cognitive and as fundamentally part of human identity, that evolves with time and context in and of the present.

The human production of the past is changed and threatened through the spread of AI agents, namely ‘AI models and algorithms that can autonomously make decisions in a dynamic world’ (Heikkilä 2024). Zittrain (2024) uses the term ‘AI agents’ to describe ‘AIs that act independently on behalf of humans’ and cautions that the ‘routinization of AI that doesn't simply talk with us, but also acts out in the world, is a crossing of the blood–brain barrier between digital and analog, bits and atoms’. Although the term AI agents is not new, it is generative AI (using foundation-based models) that makes agents more universal in that they can learn from the world that humans interact with (Heikkilä 2024).

In this article, I present some of the rapidly emergent and experimental uses of AI which are collapsing the boundaries of memory in the head and memory in the wild (Barnier and Hoskins 2018), making the past seem strange and uncanny. I first turn to explore key definitions and recent trends in the nature, uses, and effects of AI on human intelligence and experience, including the (toxic) kind of past that is being created. I then address the nature and consequences of the AI creation of a past that never existed and explore the emergent conversational means of memory. Next, I advance my argument of a third way of memory, examining the potential for human agency (consent and control) faced with a past increasingly made and remade through AI agents (services and bots). Finally, to offer some hope for the third way of memory, I consider ‘glitch memories’, as a case of the use of generative AI in overcoming forgetting and in giving human remembering new vitality and new hope.

Who made the past toxic?

The Association for the Advancement of Artificial Intelligence defines AI as ‘the scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines’.Footnote 9 AI represents a capacity for machines to solve problems and make rational decisions following procedures akin to human processes of learning by repetition and recognition (de-Lima-Santos and Ceron 2021, 14). Broadly speaking, it is ‘the tangible real-world capability of non-human machines or artificial entities to perform, task solve, communicate, interact, and act logically as occurs within biological humans’ (de Zúñiga et al. 2023, 318).

With the advent of ChatGPT and other chatbots and interfaces, there is a hugely advanced capacity for human-like interaction with these machines, to prompt and produce something that seems like human memory (above). This then raises the question of the character, function, and finitude of this interactive or conversational production of the third way of memory. This is the production of a past somewhere between, as well as within, the human and the machine, with the latter possessing an increasingly powerful, complex, and opaque memory.

At the same time, there is a form of past forged from an incredible archive accumulated through all our digital trails. How did we get to this point at which a technology can appropriate such an infinite memory? I argue that the relationship between media and memory fundamentally changed in the 2010s (Hoskins 2011, 2013, 2017a, 2017b). A digital tsunami upended any sense of certainty once afforded by the trend in the more predictable ‘decay time’ (Hoskins 2013) of modern media. Today, it is clear there is a new monster of memory. Nothing is left alone anymore! Much of life is augmented, no encounter or experience seems unrecorded or unshared. And the services which promise instant or reliable deletion or erasure of sent messages, images, or videos, are often compromised by the fluidity and easy reproducibility of digital data and information (Hoskins 2015).

Silence and contemplation are the enemies of the digitisation and datafication of everything (Lagerkvist 2022). Digital devices, apps, and services increasingly penetrate and constitute everyday experience. Never have billions of individuals instantly produced, recorded, and shared so much data and information about themselves, their experiences, thoughts, preferences, and relationships.

All of this then results in the most massive and complex record of the human past ever accumulated. As James Bridle (2023) explains,

The big tech companies have spent 20 years harvesting vast amounts of data from culture and everyday life, and building vast, energy-hungry data centres filled with ever more powerful computers to churn through it. What were once creaky old neural networks have become super-powered, and the gush of AI we're seeing is the result.

To ask then about what memory is or does, especially of and from the 2010s, requires attention to the phenomenon of individual digital participation, for it is we who feed the ghost that haunts us.

The AI memory monster requires huge amounts of data scraped from the web for the AI models that feed chatbots. However, when machines consume material made by other machines, what was once a more discernible human past becomes warped. For instance, M. Wong (2023) explains, ‘The problem with using AI output to train future AI is straightforward. Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional – their outputs filled with biases, falsehoods, and absurdities’. This is a matter of the further automated poisoning of the past. Thus, ‘model collapse’, according to Shumailov et al., is a ‘degenerative learning process where models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality’ (2023, 2). They argue, therefore, that ‘the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet’ (Shumailov et al. 2023, 1).
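The dynamic that Shumailov et al. describe can be illustrated with a toy simulation (my own sketch, not their method): a ‘model’ that is nothing more than the empirical distribution of its training data is repeatedly retrained on its own output, and rare events, once dropped by finite sampling, can never return.

```python
import random
from collections import Counter

def fit(samples):
    """'Train' a toy model: the empirical distribution of its training data."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {token: c / total for token, c in counts.items()}

def generate(model, n, rng):
    """Sample synthetic data from the fitted model."""
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n)

rng = random.Random(0)
# Generation 0: 'human' data with one common event and many improbable ones.
data = ["common"] * 900 + [f"rare_{i}" for i in range(100)]
vocab_sizes = []
for generation in range(20):
    model = fit(data)
    vocab_sizes.append(len(model))
    data = generate(model, 1000, rng)  # the next generation trains on model output

# The range of events the model knows can only shrink: improbable events
# missed by one generation's sampling are forgotten by all that follow.
print(vocab_sizes[0], vocab_sizes[-1])
```

The shrinkage is structural rather than accidental: each generation can only emit what the previous one retained, so the ‘projection of reality’ narrows monotonically.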

Generative AI does not produce some kind of neutral or idealised past but instead accentuates inequalities, seeds disinformation, and violates personal privacy. Vallor, for instance, writes of ‘The AI Mirror’, which renders ‘an image of our humanity arbitrarily both sanitized and polluted’ (2024, 35).

Datasets of the required scale for LLMs to function efficiently can reproduce (and further hide) explicit images, violence, rape, pornography, and sexist, racist, and ethnic slurs (Birhane et al. 2021), as well as correcting towards heteronormativity (Zawacki 2023). For instance, as Bridle (2023) suggests:

AI image generators, in their attempt to understand and replicate the entirety of human visual culture, seem to have recreated our darkest fears as well. Perhaps this is just a sign that these systems are very good indeed at aping human consciousness, all the way down to the horror that lurks in the depths of existence: our fears of filth, death and corruption.

We have never much liked or understood the generation that went before ours or why they did what they did (Lowenthal 2012). As well as aiding memory, media, through rotting, decay, and obsolescence, have assisted in the obscuring and denial of the worst of human nature, helping to smooth over the now unpalatable acts of our forebears (Hoskins in press). It is digital participation, however, which produces a more unpredictable and stickier past, despite the creation of new rules and systems (‘moderation’) to filter, erase, and forget that which we cannot, or refuse to, confront in the present (Merrin and Hoskins 2024).

This product of mass digital participation is now feeding the training datasets for foundation models for an array of AI applications and services.Footnote 10 To attempt to stem the flow of LLMs' generation of toxic and harmful content requires human input: individuals must look at (and remember) the very worst of ‘the horror that lurks in the depths of existence’ (Bridle, above) so that the rest of us do not have to. Low-paid workers in Kenya, for instance, were employed to screen out violence and sexual abuse in the development of OpenAI's ChatGPT. They claimed to have suffered trauma, anxiety, and depression in the process.Footnote 11 The AI generation or processing of this past requires a new layer of human screening. The more machines push humans out of the memory loop, the more humans are needed to make the past tolerable and sanctionable, within the mores of the present.

This emergent battle over what the past means in the present, however, is fundamentally different from the long history of conflict over memory. This is because generative AI is not only, or even mostly, representing or producing a past (good or bad) that was once lived, experienced, and shared. The AI past is, rather, being rendered through that which has been collected, aggregated, mined, sifted, and sanitised, and which has not been formed and made accessible in such a way before. With this in mind, I now turn to address the nature and consequences of the creation of a past that never existed.

The past that never existed is here

AI untethers the human past from the present; it produces a past never encoded into memory in the first place, so that we are now entangled in and confronted by a past that never existed. Individual digital participation is amassing an astonishing record of human experience, action, and movement, a fusion of communication and archive, used to watch, identify, monitor, exploit, persecute, target, and kill (Hoskins and Illingworth 2020). Our personal data are used increasingly without our consent. We do not and cannot possess a full grasp of the future uses and abuses of our digital trails, whether seemingly private or public.

Our own digitally enabled production of information and data about ourselves and others has potentially profound impacts on memory, consciousness, privacy, and agency. Fed by huge amounts of data – much of this our personal data – AI models are increasingly able not only to aggregate inputs at a scale beyond the capacity of a human mind but to generate novel artefacts from these aggregations (Magni et al. 2023, 2). And it is through individual and mass digital living in this century, through all our visible and invisible digital trails, that we have seeded (and conceded) the basis from which a past that never existed could be created.

The way personal data about individuals, generated through their use of digital apps and platforms, is harvested, stored, reused, and sold, often in opaque ways, has been largely accepted as an unwanted but unavoidable trade-off between access to essential services and a loss of privacy. ChatGPT's rolling out of ‘memory’Footnote 12 in early 2024Footnote 13 is in effect a personalisation of this kind of surveillance, in the name of even greater convenience and enhanced memory. ChatGPT offers two kinds of memory service: the first, where you tell it to remember something specific about you; the second, where ChatGPT learns from you as you interact and ‘converse’ with it.Footnote 14 In this way, chatbots and other AI-driven apps and platforms offer both ‘explicit’ and ‘implicit’ (Erll 2022; Schacter 1987; Schacter and Graf 1986) personal memory services.
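As an illustration only, the two kinds of memory service might be sketched as a two-tier store that is silently prepended to future conversations. The class, rules, and strings here are invented for the sketch and are not OpenAI's implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserMemory:
    """Hypothetical sketch of a chatbot's two-tier 'memory' about one user."""
    explicit: List[str] = field(default_factory=list)   # facts the user asked it to keep
    implicit: List[str] = field(default_factory=list)   # facts inferred from conversation

    def remember(self, fact: str) -> None:
        """Explicit memory: 'remember that I am vegetarian'."""
        self.explicit.append(fact)

    def observe(self, utterance: str) -> None:
        """Implicit memory: infer facts from ordinary chat (toy keyword rule)."""
        if "my daughter" in utterance:
            self.implicit.append("user has a daughter")

    def context(self) -> str:
        """Everything silently prepended to the user's next conversation."""
        return "; ".join(self.explicit + self.implicit)

memory = UserMemory()
memory.remember("user is vegetarian")
memory.observe("I took my daughter to nursery today")
print(memory.context())  # both kinds of memory now shape future replies
```

The point of the sketch is that the implicit tier accumulates without any deliberate act of remembering on the user's part, which is precisely the personalised surveillance described above.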

Many of the explicit memory services of the digital era are focused on the recording, storing, editing, organising, aggregating, and sharing of your photographs and videos. Some are more implicit, in that they offer an always-on service that both continuously records information about your actions and experience and renders this accessible to you. Sensecam, a small sensor-equipped camera worn around the neck that takes point-of-view images every time the camera moves, or every 30 seconds, was seen as pioneering in the digital era. For instance, Martin Conway, Shona Illingworth, and Catherine Loveday have experimented with Sensecam to help an amnesiac patient, Claire, to remember (Albano 2022; Illingworth 2015).

AI platforms go further than digital devices (such as Sensecam) by increasingly blurring what might have once been thought of as implicit or explicit, unconscious or conscious, individual or collective, in the recording, remediating, and repurposing of experience. They do this in two related ways. The first is in delivering on what was once only an imaginary, or even a fantasy, of total memory (which I return to consider below). The second is in producing a past we don't need to humanly remember.

For example, Personal.AIFootnote 15 offers a ‘digital version of you’, a kind of personal memory machine. It works by creating a time-bound structured dataset called a ‘memory stack’, made up of ‘memory blocks’. Each block is a unit of data from your life, associated with a specific time, certain people, a certain context, and a certain emotion, and the blocks are connected in the stack. I spoke with Suman Kanuganti, Personal.AI's founder and CEO, who explained: ‘It's almost like an automated database that is spinning for you, behind the scenes. The goal was to automate it to a degree where people don't have to worry about it’.Footnote 16 Personal.AI is built on the idea of an AI of you that is created from data and information that you feed it or allow it to be fed.
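A loose sketch of how such a ‘memory stack’ of ‘memory blocks’ might be structured, with the caveat that the field names and methods below are my own illustration and not Personal.AI's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class MemoryBlock:
    """One unit of life data, tagged with a time, people, context, and emotion."""
    when: datetime
    people: List[str]
    context: str
    emotion: str
    text: str

@dataclass
class MemoryStack:
    """A time-bound, connected stack of blocks, queryable like a personal database."""
    blocks: List[MemoryBlock] = field(default_factory=list)

    def add(self, block: MemoryBlock) -> None:
        self.blocks.append(block)
        self.blocks.sort(key=lambda b: b.when)  # keep the stack time-ordered

    def recall(self, person: str) -> List[str]:
        """Retrieve every block connected to a given person, oldest first."""
        return [b.text for b in self.blocks if person in b.people]

stack = MemoryStack()
stack.add(MemoryBlock(datetime(2023, 6, 1), ["Ana"], "work", "joy",
                      "Launched the project with Ana"))
stack.add(MemoryBlock(datetime(2021, 3, 5), ["Ana"], "travel", "calm",
                      "Hiked with Ana in Skye"))
print(stack.recall("Ana"))
```

The ‘automated database spinning behind the scenes’ in Kanuganti's description corresponds here to the stack's silent ordering and cross-linking of blocks, so that recall requires no effort from the person whose life it records.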

To grasp the transformation of AI's production of a past that never existed, it is useful to consider approaches to forgetting in the human sciences. A key distinction is made between forgetting owing to a failure to encode information in the mind into long-term memory, and forgetting due to a failure to retrieve from long-term memory (Erdelyi 2006).

In the first case, the past remains unencoded, so information about an event or experience never made it into memory in the first place. Perhaps this could include the unremarkable, the unnoticed, or the unrecorded. This might happen owing to a lack of attention. In the second case, forgetting occurs owing to a failure to retrieve information. This might happen due to the degrading of memory over time, motivated forgetting, silencing, or suppression.

Generative AI turns this formula of forgetting on its head.

Firstly, it enables the extraction and reinvention of a past that was never noticed, experienced, or initially encoded into memory. It does this by hoovering up our digitally scattered lives, including personal data, information, and images, and aggregating these inputs to shape novel artefacts. This is a kind of new, ‘new memory’ (Hoskins 2023). Just a note on this idea of new memory. This is something that I have written about for over two decades. It highlights that remembering is a process that is inevitably shaped in and through the present and is thus entangled with the nature, forms, and control of the technologies, media, and institutions of the day. New memory also signals that the value afforded by individuals and societies to remembering and forgetting changes over time in relation to these same entanglements. AI-driven chatbots and services force a deeper entanglement of human memory in the tech of the day. But this memory is also new in that it is made, in part at least, from an unlived past, one that was never experienced to be humanly remembered in the first place. It is no wonder, then, that today it feels uncanny.

Secondly, digital participation leads to an overproduction and oversharing of information. Merrin (2021, 18), for example, argues: ‘Today, almost nothing escapes potential capture, shareability, and being added to the pornographic, hypervisible, hyperintimate collection of the museum of the real’. This feeds the long-established idea that if we collect and combine as many of the representational, archival, and circulatory technologies, discourses, and witnesses of the day as possible, then this in some way secures the past. But the belief in, or fantasy of, the digital recording of everything raises the question of what exactly will be humanly accessible, by whom, and for how long.

The answer lies in the fact that a new curator – AI – has arrived to take charge at the ‘museum of the real’.

AI has reversed the formula of forgetting by both feeding off, and offering the reality of, a total memory. This is precisely what AI-enabled memory apps such as Rewind.AI and Personal.AI offer: to host or create your past so that it becomes something that you do not need to remember.

A more provocative view is found in those who see scale as a solution to securing the future memory of today. Van der Werf and Van der Werf (2022, 987), for instance, write that ‘we need to embrace digital society and understand that information overload is a prerequisite for it to thrive’, and that we should provide future generations with large quantities of data that they can then deliberate over and make decisions about. They see the potential for future generations to use ‘AI-based tools to construct multiple collective and individualised memories, histories, and truths. The more data we leave behind the richer their storytelling and memorialising will be’ (ibid.).

This view suggests a panacea of total memory, a free-for-all of infinite wisdom, where all lessons are learnt in the museum of the real. But having, or seeming to possess, the memory, for instance on digital devices, in the cloud, or in archives, is very different from accessing the memory, in other words being able to find, retrieve, understand, and use it. The past is surely as abundant as it has ever been, but availability at scale is no guarantor of memory's security. Rather, it is overproduction and the complexity of the digital archive, personal or collective, that hampers accessibility (Hoskins in press).

AI challenges what accessibility means by providing a proxy accessible ‘memory’ in place of that which is unavailable. Google, for instance, states in its privacy policy that it uses ‘information that's publicly available online or from other public sources to help train Google's AI models and build products and features using these foundational technologies’.Footnote 17 All of our public pasts, the entire history of our digitally entangled selves, are suddenly vulnerable at scale to the forging of a new and novel memory, a memory trained on and extracted from the archive of us.

AI renders the past conversational

The idea of a dialogical construction of memory is appropriated by AI's promise of enabling an eternal conversation with the past you, and the past others. There are many AI services designed to enable your loved ones to converse with your chatbot, from your memory, once you are no longer alive or able to communicate.

The potential of AI has caught the attention of those keen to secure the end of living memory of a generation, seen as significant for having lived through or experienced an event of historical importance. For example, the testimony and memories of survivors of the atomic bombings of Hiroshima and Nagasaki, known as hibakusha, are seen in this way, not least for their capacity to warn of the horrific consequences of the use of nuclear weapons. As the average age of hibakusha is now over 85, museums, policy makers, and community organisations are actively considering the use of AI to extend the virtual presence of survivors, as this precious living collective memory vanishes.

For example, Japan's national broadcaster NHK developed a project which in early 2022 recorded the testimony of Mrs. Yoshiko Kajimoto, a survivor who was 14 years old and 2.3 km from the hypocentre when the US dropped the atomic bomb on Hiroshima on 6 August 1945. Mrs. Kajimoto had to answer 900 questions.Footnote 18

This database of recordings was developed by teaching the AI system the terms and historical background of the wartime topics discussed in Mrs. Kajimoto's answers, including food, clothing, and shelter, to build a network of memories. Participants who come to the museum (or wherever the system is installed) can then pick from 99 selected questions to ask the life-size image projected in front of them, which responds with the pre-recorded answers, as though they were engaging in a natural and live conversation with Mrs. Kajimoto. This service claims to offer a new, sustainable way to convey the experience of meeting an atomic bomb victim in person, and to hear their testimony directly.Footnote 19
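The closed design of such a system, in which nothing is generated and only the survivor's own recorded answers are ever played back, might be sketched as a simple question-matching lookup. All of the questions, the matching logic, and the file names below are invented placeholders, not NHK's implementation:

```python
# Hypothetical testimony kiosk: a visitor's prompt is matched to the nearest
# pre-recorded question by word overlap, and only the survivor's recorded
# answer clip is ever played back – no text is generated.
RECORDED = {
    "what did you see after the blast": "answer_clip_014.mp4",
    "what did you eat during the war": "answer_clip_027.mp4",
    "what message do you have for young people": "answer_clip_090.mp4",
}

def match(prompt: str) -> str:
    """Pick the recorded question sharing the most words with the prompt."""
    words = set(prompt.lower().replace("?", "").split())
    best = max(RECORDED, key=lambda q: len(words & set(q.split())))
    return RECORDED[best]

clip = match("What did you eat in the war?")
print(clip)
```

Because the system can only ever select among its fixed recordings, the testimony remains constrained but intact, which is exactly the property that generative, foundation-model approaches give up.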

Mrs. Kajimoto explained to me that, in addition to the tiring five full days of interviews for a then 91-year-old so that her answers could be recorded, another principal challenge for this project was attaining the approval of her sceptical family for her taking part.Footnote 20 Beyond her welfare, her family's concerns included how and by whom the recording might be used in the future. The ‘AI-based Testimony Response Device’ appears to be owned by the broadcaster NHK, which commissioned the project, without Mrs. Kajimoto having any specific ownership of the recorded content.Footnote 21

This example is a tentative first step in Japan's official exploration of the use of AI to preserve the testimony of a generation that possesses the living experience and memory of the defining catastrophe of the US atomic bombing of the country almost 80 years ago. But the critical juncture and decision the country now faces is whether it will take the leap to using generative AI in its cause of preserving memory. The NHK project was conceived before the emergence of foundation models, which offer new kinds of creativity from training on a broad set of unlabelled data, with much wider application (Murphy 2022). The latest generation of ‘Expressive AI Avatars’, for example, according to the company Synthesia, offers ‘dynamic and lifelike digital personas that blend the best of human and artificial intelligence into one seamless experience’.Footnote 22 Synthesia claims:

Expressive Avatars don't just mimic human speech; they understand its context using our custom built EXPRESS-1 model. Whether the conversation is cheerful or sombre, our avatars adjust their performance accordingly, displaying a level of empathy and understanding that was once the sole domain of human actors.Footnote 23

These features help deliver a convincing conversational, human-like, and trustworthy interaction. This all raises the question: what would happen if the living memory of a generation, seen as vital for its carriage of the first-hand witnessing and survival of a nodal event in defining a group or nation's history and identity, were suddenly opened up to being remade through such rapidly accelerating reality-morphing technological change? A fear of this risk is causing stasis (in deploying AI at any kind of scale) in the institutions charged with preserving and protecting the collective memory of the hibakusha. Beyond the constrained responses of the avatar of Mrs. Kajimoto (above), advanced generative AI models, trained to capture human-like features and open to more creative utterances and responses, would render the words and voices of a generation-defining group of survivors open to a greatly uncertain fate and future.

Certainly, generative AI adds to the capacity of digital networks and tools to transform the lifespan as well as the nature of the memory of an individual and a society. The idea of the living memory of a generation is influential in the study of collective memory, especially entangled in interpretations of the work of Halbwachs, who argued: ‘Our hold on the past … never exceeds a certain boundary, which itself shifts as the groups to which we belong enter a new phase of their existence. It is as if the memory needed to be unburdened of the growing mass of events that it must retain’ (1980, 120). The past used to dissipate, decline, and decay, in and through the media of the time, in mostly comprehensible ways and at predictable rates (Hoskins 2013). Today's AI memory is instead burdened anew in its infinite potential to be remade and repurposed. This creates a new impossibility of human forgetting, to which I return below. At the same time, given the seamless blending of human and machine promised above, who will be able to ascertain what was ever real, or intended, or consented to, as a form of the remembering or remembered human subject?

The hibakusha example is a case of a prospective form of arrangement, under ever more pressing consideration at the end of living memory – of a person or a generation. However, it is AI's retrospective force on the past that is being put to an array of new ends, busily resurrecting the voices and images of the dead. For instance, the deepfaking of the voices of children killed in shootings in the United States (see below) is a new kind of AI-driven memory activism, part of a wider trend in the radicalisation of memory.

AI cloning is not restricted to professionals trained in the technology but is moving rapidly into use by anyone, including those suffering from grief. Madeline de Figueiredo, a woman widowed in her twenties, used AI voice cloning on digital recordings of her dead husband, Eli, to enable her to have one last ‘impossible conversation’ with him (de Figueiredo 2024). She writes of her experience:

It's hard to explain the feeling that came with hearing Eli's voice speak the novel language after nearly two years of his absence … In some ways, it was worse than reality, and in other ways, it was better. I felt as though I had been knocked into a different dimension that was simultaneously disorienting and blissful. I wanted to linger forever in its potential and immediately eject myself from the self-deception. (ibid.)

The astonishing potential of generative artificial intelligence, then, to create human-like interaction, including conversation, with forms and presences of those no longer living, jars against an underlying sense of the reality of the present.

At the same time, there is a seduction in the promise of media, accelerated in the digital era, that if only we can record everything then this will afford a future of greater access to and control over the past. However, the benefits of the amassing of a total memory also involve the creation of a memory that is no longer your own.

These ideas of the creation and uses of an all-encompassing version of you appear to have become the reality of memory in the AI era. For example, apps like Heyday record everything we read online, then sort the information into accessible categories, allowing us to ‘outsource’ our memories to the AI (Agarwal 2023). Similarly, MindOS Memory TwinFootnote 24 lets us export our memories and thoughts into an ‘AI powered companion’, so you can ‘look back on any moment or even collaborate with your Twin using their unique memory of you'. Furthermore, the Personal.AIFootnote 25 app can be employed to create a ‘unique model that truly represents who you are’, including voice cloning features, allowing users to create a ‘complete’ version of themselves in the form of a bespoke AI.

The emergence of both the conversational past and the past that never existed not only reshapes and remixes individual human and collective memory but reconstitutes what it means, and what is possible (and impossible), to forget. Who has access to and control over this new relationship between human and machine, and its messy imbrication of a given individual's personal information and history in the computer model, as revealed through their conversation, is part of a new battle over the future of memory. I now turn to address the role of AI in transforming the relationship between remembering and forgetting, and the uncertain space in-between.

AI agents of memory and the deadbot memory boom

The third way of memory recognises how the growing autonomy and infinite potential of AI agents to be remade and repurposed, and to remember, semi- or fully independently of human control and oversight, smash the once more distinct individual, social, archival, and generational boundaries of the past (boundaries that have also been influential on the conceptual and structural basis of the field of Memory Studies).

AI agents are redefining the memory of a lifespan of an individual and a society, messing up ideas and assumptions as to the finitude of media and of memory, including notions of decay, obsolescence, and an array of established ‘forms of forgetting’ (Assmann 2014). For instance, as Zittrain (2024) puts it: ‘There's simply no way to know what mouldering agents might stick around as circumstances change’.

A striking trend to this end, in only the past two years, is the deadbotFootnote 26 memory boom.Footnote 27 This is how chatbots and other AI-driven systems, in enabling new forms of, and communication with, increasingly interactional representations of the dead, transform how individuals and societies are remembered and (not) forgotten. What will the past look like when deadbot memories both outnumber and outlive the human?

Deadbots are AI-driven systems which emulate the personality, behaviour, voice, and/or appearance of persons already deceased, or which are created with the intent of emulating someone once they are dead. Hollanek and Nowaczyk-Basińska (2024) define a deadbot as ‘an AI-enabled digital representation of a deceased individual created by a re-creation service’. Moreover, deadbots are designed to interact with the living in a way that mimics how the deceased might have communicated. These include chatbots of you created with consent, be these audio or avatars, which may offer comfort to the bereaved and assuage trauma,Footnote 28 or deepfakes created without consent, all bringing the living and the dead into new relationships.

There has long been a concern with death in the digital age, from the risks of inaccessibility (or obsolescence) of our digital selves (social media, images, messages, connections, and so on) for all those we leave behind who want to remember us. The so-called ‘digital afterlife industry’ (Bassett 2022; Kasket 2019; Lagerkvist 2017; Öhman and Floridi 2018; Sisto 2020; W.H. Wong 2023) became firmly established in the 2010s. This includes a huge array of services and platforms devoted to digital forms of memorialisation, and to the persistence and preservation of the digital you after your death.Footnote 29

Yet AI agents blur the boundaries between the living and the dead in new ways. In ‘the post-mortal condition’, as Öhman calls it, ‘the past and its dead have once again become present to us’ (2024, 15). The nature and finitude of living memory, as it is embedded in the media of the day, is rendered forever uncertain. It is also important to recognise the ease with which any individual can today create a deadbot of themselves or of a loved one, with minimal technical knowledge, through readily available and usable tools (de Figueiredo 2024).

Deadbots created for memory preservation are seen as different from ‘deepfakes’ (Meikle 2022), which tend to refer to non-interactive synthetic media created for the purposes of misinformation or entertainment. Yet deepfakes also contribute to the deadbot memory boom, in the mass proliferation of convincing audio/visual versions of individuals, for instance, in reshaping a memory of them through putting new words into their mouths, from a past that never existed.

There are two principal forms of deadbot: prospective and retrospective. The retrospective is how your digital participation has left an astonishing array of images, audio, and video of you scattered, to be found, connected, and used to feed generative AI transformer models once you are no longer alive. The prospective, in contrast, requires us to focus on the living: on how and why individuals are training an avatar or chatbot to simulate them once they are dead.

An example of a prospective deadbot platform is Hereafter.AI, which promises to ‘Let loved ones hear meaningful stories by chatting with the virtual you’.Footnote 30 Such services are increasingly app-based, with the user training the AI in response to prompts and questions from a ‘virtual interviewer’, as well as uploading images and videos. These services are often marketed as offering a kind of total memory and the idea of an eternal you (Lagerkvist 2017).

An example of the retrospective form of deadbot is how some parents who have lost their children in shootings in the United States have used an AI voice generator to deepfake their dead children's voices for use in automated telephone calls to lawmakers, as part of a campaign to push for greater gun control (Stern 2024). Although these cases may sound like science fiction, it is the reality of generative AI that is being deployed to weaponise memory today, as past voices and images of the dead are remade for the ends of the living.

The deadbot memory boom is just one feature of a trend towards the end of human forgetting, in terms of the proliferation of versions of ourselves and their potential to persist and change after our death. The latest developments in generative AI suggest that models are forging a long-term memory, rather than just remembering exchanges within a given conversation. ‘ChatGPT Memory’ is a feature which stores a long-term memory of personal details that you share, and so will be able to ‘personalise’ the conversation that you have with it.Footnote 31 This memory boom raises a huge number of legal, ethical, moral, social, technological, and political questions, including: Who might be responsible for your deadbot and for how long? What rights do you have and, conversely, how accountable are you – or at least your remaining family – for what the deadbot might say or reveal? And, relatedly, how secure is your deadbot – is it vulnerable to being perpetually hacked and to inserting you into a past that never existed? Furthermore, as Kneese (2023) argues, these services come at a cost to the living, in the human labour required for their operation, and in the wider maintenance and environmental costs entailed in all forms of digital production.

AI services not only render the boundaries of individual human memory uncertain, but they also reimagine and remix the relationship between individual and collective memory. For example, Wired magazine advises that if you share a ChatGPT account with friends or family, then you should turn off the Memory option, as: ‘With Memory activated, the chatbot might blend all the details from multiple interactions into one composite understanding of who the user is’ (Rogers 2024).

In this way, AI deepens the risks associated with what I call ‘grey memory’ (Hoskins and Halstead 2021). Grey memory is how contemporary technologies push a conscious, active, willed memory out of individual human reach, through obscuring the risks of the ownership, use, access, costs, and finitude of digital data. The present and the future forged from all of the pasts of our and others' digital participation and trails seem incredibly uncertain and unpredictable, despite the immediacy and convenience of our continuous use of apps, services, and platforms affording a sense of agency and control over our proliferating digital selves. The new conversational past (above) feels benign in its sociable affordances.

To adopt the perspective of the third way of memory is to recognise how AI offers human memory new liberating imaginaries, forms, and horizons. The displaced, denied, and precious past can be revisioned, remade, and re-experienced, with astonishing ease, rejoining individual and collective memory. For instance, oral testimony and storytelling about past experiences, events, and relationships, vital to the formation of identity, belonging, the assuaging of trauma, ‘moving on’, as well as sheer nostalgia, can be translated into a new kind of anchoring vision and record.Footnote 32

This retrospective use of AI in shaping memory is also joined by a new prospective use of AI to generate imaginaries of what events and experiences might or should look like. There is a body of work in media and communication studies that considers how media (images, templates, narratives) are used to ‘premediate’ (Brown and Hoskins 2010; Erll 2008; Grusin 2004, 2010; Hoskins and O'Loughlin 2009) what is to come, to help make the future plausible and thus more manageable and controllable. Premediation in essence protects from future shocks through the development of a greater preparedness for what might come, based on previous experience.

The idea of retrospective and prospective memory also has a long tradition of work in the cognitive sciences.Footnote 33 Prospective memory is remembering to undertake a future task, whereas retrospective memory is the capacity to recall something that was previously learned (Shum and Fleming 2011). Furthermore, Conway et al. (2016, 257) identify a ‘human remembering imagining system’. This is an extended form of consciousness that consists of memories of the recent past and images and expectations of the near future. Thus, memory, society, and culture constrain the range of possible futures by providing the context in which the future will most probably occur.

Generative AI goes beyond premediating the future, or offering a context for it, by creating a record, a memory of it, in the wild. This includes the use of AI to generate photographs depicting future events, such as weddings and birthdays, with individuals who are unlikely to live to experience them because they suffer from incurable illnesses (Bryan 2024). In this way, human imaginaries are translated into human-machinic visionaries, outside the head and beyond the lifespan. I now turn to address a key example of this translation in the retrospective use of AI to generate photographs of a lost and fading past.

Glitch memories

In recent years, there has been increasing interest in how external stimuli in the form of digital tech are messing with the cognitive system. There has been a significant acceleration in psychological work testing the influence of media forms, technologies, and practices, including internet usage, on the capacity and reliability of human memory (Fawns 2022; Marsh and Rajaram 2019; Rajaram in press; Risko et al. in press; Storm et al. in press; Stone and Zwolinski 2022; Wang 2022). Experimental work in psychology by Henkel (2014), for instance, shows that the act of photographing objects in a museum adversely affected individual memory of the objects.

Schacter (2021) provides a commanding review of psychological work on the impact of media and technology on four of his ‘sins’ of memory, developing from his famous ‘Seven Sins of Memory’, namely seven categories of error or distortion in individual human remembering. The first three sins are ones of omission: transience, absentmindedness, and blocking. The next four are sins of commission: suggestibility, bias, persistence, and misattribution. Schacter (2021), however, is sceptical as to the broader effects of shifts in media and technologies in changing our general outlook on reality and the shape of our memories, in what he labels the ‘domain general’. In a 2021 interview with me for Memory, Mind & Media, Schacter explained that there is no evidence to indicate that a broad effect of ‘engaging with and frequently experiencing social media, eventually fundamentally alter[ing] the underlying experience of memory in multiple domains’ is happening at this stage, although he also stated that it is not ‘inconceivable’ that this domain general effect could ‘eventually happen’.Footnote 34

Since this interview, as I remark above, it is generative AI which marks a step change, in the creation of high-quality text, images, and other content such as voice recordings, based on the data on which models were trained. Surely, this marks a transformation in the ‘domain general’ of memory, in the process of prompting the creation of something new in relation to things written, spoken, experienced, and recorded in the past. The inputs of this new memory are our scattered ‘public’ selves, an astonishing archive that AI transformer models are ‘transforming’ into an emergent blended human-machinic version of what was, in the present.

Another way to approach this question of the impact of external stimuli on human memory is to consider how media in a more fundamental way re-orient and distort our sense of reality. For example, Scott (2015) argues that digital technologies are reshaping what it is to be human to the extent that we are now ‘four dimensional’:

The fourth dimension doesn't sit neatly above or on the other side of things. It isn't an attic extension. Rather, it contorts the old dimensions. And so it is with digitization, which is no longer a space in and out of which we clamber, via the phone lines. The old world itself has taken on, in its essence, a four-dimensionality … Increasingly, the moments of our lives audition for digitization. A view from the window, a meeting with friends, a thought, an instance of leisure or exasperation – they are all candidates, contestants even, for a dimensional upgrade (2015: xv).

This utter revisioning and reimagining of the world includes the past itself, now susceptible to a participative ‘multitude’ (Hoskins 2017b) continually remaking it. This is both in relation to the digital connectivity of the 2010s, and the emergent chatbot-equipped, more sharded (Merrin and Hoskins 2024) sense of the relationship between individuals and AI in the mid-2020s. The past is scraped, mined, and used to train AI models: an almighty aggregation, yes, but it is also splintered, fractured, and personalised, through millions of chatbot interactions. Perhaps connective memory (Hoskins 2011) has given way to sharded memory in the AI era.

Both (connecting and sharding) trends contrast with an earlier (‘broadcast’) media era: ‘Whilst broadcast era production was standardised, uniform and finished, production in the post-broadcast era is marked by the rise of customisation, personalisation and “the perpetual beta”’ (Merrin 2014, 1). In the same way that memory is considered an ‘ongoing process’,Footnote 35 rather than something fixed or finite, it is equally productive to see media and communication as processual, as never-ending. This allows us to grasp more easily the radical effects of AI in giving the past an entirely new ‘dimension’ (in Scott's terms) or even ‘domain’ (in Schacter's terms) that remakes the (ongoing) relationship between humans and machines.

For me, this contributes to a third way of memory, that is, to realise how AI offers new contexts, new forms, and new imaginaries of the past at the intersection between human and machine. The faded, the fading, the blocked, the lesioned, the traumatised, the displaced – all the ways in which individuals and communities struggle to recover a lost past, to reimagine that for which there were no records, or for which records have been lost or destroyed. AI gives new hope to memory, to overcoming forgetting, but it also gives the past a new (synthetic) shape.

For instance, the Barcelona-based design studio Domestic Data Streamers has run their ‘Synthetic Memories’ project since 2022.Footnote 36 Using generative AI image models, the studio works with displaced and immigrant communities to recreate photographs lost when families moved, or even of experiences and events that were never visually documented. A person describes an experience or event, and an engineer draws on each recollection to write a prompt for a model, which generates an image (Heaven 2024). But unlike AI tools such as Google's Magic EraserFootnote 37, the aim is not to create a sharply focused or idealised image of a memory. Rather, the studio used the generative image models DALL-E 2 (OpenAI) and Stable Diffusion (Stability AI), from 2022, which can produce glitchy images, with misshapen faces and not quite formed bodies. The resulting images are more like the blurred imaginaries of individual memory in the head than the fully formed and fixed vision often associated with the photograph. Pau Garcia, director of Domestic Data Streamers, kindly spoke with me and explained:

What is important is not the clarity and realism, but the emotional truth that is embedded into it … it's got this more blurry, undefined quantum imaginary where things are transforming all the time. I think memory works a bit like this. It's not like it is fixed into something, it is something that is changing. You look at it and it has one shape, but you look again and it's a bit different. I think this glitchiness that is quite evident in the models of artificial intelligence is very helpful.Footnote 38

A good example is the experience of Carmen, aged 94 when she spoke to Pau Garcia in 2023. She recounted how, when she was six years old, her mother used to pay another family so that they could enter a house in Barcelona and go up to its balcony. This balcony was important as it had a view facing la Modelo prison, which during the Spanish dictatorship was a political prison. Carmen's father was a doctor for the anti-fascist front and was being held there. The only way that they could see each other was from the balcony and the window of the prison. In response, Domestic Data Streamers wrote a description, which prompted the generation of an image in the style of an old photograph of a mother and daughter on a balcony in Barcelona. But it was not until Carmen saw a blurry, glitch version of this scene (Figure 1, below) that she could really recognise what was before her as triggering the memory.

Figure 1. Image of Carmen and her mother at a balcony looking across to la Modelo prison, Barcelona, as recreated by Domestic Data Streamers (image reproduced here with kind permission of Domestic Data StreamersFootnote 39).

AI-produced glitch memories are like a reversal of ‘flashbulb memories’ (Brown and Kulik 1977; Conway 1995; Neisser 1982/2000), namely human memories recalled so vividly and with such clarity that they are said to possess a ‘photographic’ quality (Hoskins 2009b). Glitch memories instead are rendered through the natural flaws and decline of human memory in interaction with generative image models’ emergent and messy translation of human prompts. The result looks like a distorted photograph, as though key identifying features have been smudged. Entangling the human and the technology in new relations to produce a negotiated and humanly recognisable vision of the past is an example of the third way of memory, in overcoming human forgetting and giving remembering new potential and new hope.

Conclusion: can the past be saved?

Generative AI changes what memory is and what memory does, pushing it beyond the realm of individual, human influence and control, yet at the same time offering new modes of expression, conversation, creativity, and ways of overcoming forgetting. This is part of a battleground between humans and computers in the shaping of reality.

I have argued that it becomes increasingly important in the AI-defined era of the 2020s and beyond to see how individuals and societies, through new interfaces, are suddenly being confronted with an uncanny past, one that is familiar yet strange. This third way of memory, of new human-machinic and individual-collective conflagrations, offers an apparent panacea: new imaginaries of what was and what could have been, vital for new ways of putting the past to rest, for moving on, and for (re)discovering and remaking all that was precious yet has since been blocked or lost.

Yet, these developments come at a profound human cost. The same technologies, which affect the workings of a conscious, active, willed memory, also obscure the risks of the ownership, use, access, costs, and finitude of the past, a greying of memory.

Furthermore, the third way of memory involves a rapidly accumulating force of AI agents, with increasing autonomy and infinite potential to remake and repurpose individual and collective pasts, beyond human consent and control. An emergent retrospective and prospective deadbot memory boom, as I have called it, is collapsing the traditional individual, social, archival, and generational boundaries of the past, generating a new conflict over living memory.

In very recent years, the battleground between humans and computers in the shaping of the reality of memory is evident in policy, as well as in individual attempts to wrest back human control. On the one hand, there are proponents of a ‘right to be forgotten’Footnote 40 (we are haunted by our digital traces) and, on the other, those who fear a ‘digital Dark Age’Footnote 41 (the obsolescence of software and hardware rendering the digital past inaccessible). The European legislation on the Right to be Forgotten (RtbF), proposed in 2012 and once considered a breakthrough in the protection of personal information online, is no longer adequate, if it ever was (Hoskins 2014). This right (later enshrined in Article 17 of the EU General Data Protection Regulation) gives a person the right to have their personal data deleted in certain circumstances.Footnote 42

But in the AI era, it is difficult to imagine an effective form of a right to be forgotten. This is owing to the ambiguous status of ownership of digital content, including content that has been published or shared on public platforms with a limited grasp of consent for, or of the nature of, its future use, including its amalgamation with other content and its feeding of AI's memory. There is an anti-autobiographical future in which it is impossible to extract your/self from the chatbot of you, and the chatbots of others.

This raises the question: what can be done to protect and preserve all our pasts and our past selves, increasingly vulnerable to AI rendering them available to all at a new scale? For instance, generative AI threatens the collective memory of the hibakusha (above) with the risks of impersonation, deepfakes, and testimony put to new ends, at a critical stage at the end of the life of a generation of survivors of the atomic bombings. I have proposed that special status be afforded through Japanese legislation to protect the hibakusha and their living memory, so that their words, voices, and images are not endlessly remade for all kinds of purposes, including those that they never intended or imagined.Footnote 43

Yet the AI past is fast outpacing the capacity of policymakers and regulators to legislate to offer some kind of human-scale stability and security to how and why individuals remember and forget. Moreover, the emergent third way of memory, I have argued, makes for an irresistible past, whose shape, provenance, agency, ownership, uses, and abuses, are wildly at stake.

Acknowledgments

I am very grateful for the constructive and detailed help and advice on this article from three anonymous reviewers, and I am indebted to Amanda Barnier, for her meticulous steering of this work through the reviewing process and for her own extensive feedback and advice.

I am also grateful for the generous detailed feedback and advice I have received from Katerina Linden, Anthony Downey, Danny Pilkington, William Merrin, Amanda Lagerkvist, Martin Pogacar, Leighton Evans, Tim Peacock, and Geoffrey C. Bowker. Thank you also to those who gave up their time to speak with me about their work and experiences, including Suman Kanuganti, CEO of Personal.AI, Pau Garcia (Founder and Director) and Self Else (AI researcher) at Domestic Data Streamers, Mrs. Yoshiko Kajimoto, and Dan Schacter.

This work has also benefitted from feedback from many talks and keynotes I have given over the past few years, including most recently: Keynote ‘AI & Memory’: Achievements and Perspectives of Cultural and Social Memory Research Conference of the Research Network ‘Handbuch Sozialwissenschaftliche Gedächtnisforschung’ and the Working Group ‘Social Memory, Remembering and Forgetting’ of the German Sociological Association (DGS), Technical University of Berlin, 28 September 2023; Public Lecture: ‘A-bombings Memory and Peace’: The Hiroshima Peace Memorial Museum, 15 March 2024, and ‘AI & Memory’ presentation to the AI Horizons: Navigating the Intersection of Artificial Intelligence and the Humanities, Symposium, Swansea University, 20 June 2024. Thank you to Gerd Sebald, Luli van der Does, and to Leighton Evans, respectively, for organising these events and for inviting me.

Funding

This work has not received any specific grant from any funding agency, commercial or not-for-profit sectors. As Co-Editor-in-Chief of the journal in which this article appears, I declare that I was not involved in the selection of, or communication with, the three peer reviewers for the external double-blind peer review of this research article, which was managed by Professor Amanda Barnier.

Andrew Hoskins is a Professor of Global Security at the University of Glasgow, UK. From January 2025 he will take up a Chair in AI, Memory and War, at the University of Edinburgh, UK. He is the founding Co-Editor-in-Chief of the Journal of Memory, Mind & Media. He is the author/editor of 10 books, including Radical War: Data, Attention & Control in the Twenty-First Century (Hurst/OUP 2022, with Matthew Ford) and The Remaking of Memory in the Age of the Internet and Social Media (OUP 2024, co-edited with Qi Wang).

Footnotes

1 The term ‘artificial intelligence’ (AI) is often traced to the US computer scientist John McCarthy (1927–2011) and his 1955 definition of AI as ‘the science and engineering of making intelligent machines, especially intelligent computer programs’ (McCarthy, 2007).

6 Stable Diffusion Online (stablediffusionweb.com).

7 I use the term ‘memory ecology’ to emphasise how remembering and forgetting are processes entangled with the technologies and media of a given time and environment (Brown and Hoskins, 2010; Hoskins, 2017a, in press).

8 I first used this term in an invited paper: Andrew Hoskins (2019) Public Lecture, ‘The Algorithmic Past: The Third Way of Memory’, POEM Network, University of Glasgow, UK, 26 March, https://www.poem-horizon.eu/public-talk-the-algorithmic-past-the-third-way-of-memory/.

16 Suman Kanuganti, CEO of Personal.AI, interviewed by Andrew Hoskins, 20 May 2023.

17 https://policies.google.com/privacy (version effective 28 March 2024).

20 Mrs. Yoshiko Kajimoto interviewed by Andrew Hoskins with Luli van der Does, 14 March 2024, Hiroshima. This research project is a collaboration with Dr. van der Does, The Center for Peace, Hiroshima University, exploring the remembering and forgetting of the atomic bombings of Japan.

26 A range of alternative terms for ‘deadbot’ is in use across different disciplines and in news stories (see also Hollanek and Nowaczyk-Basińska, 2024). Savin-Baden (2022, 143–144), for example, uses the term ‘griefbot’, which she defines as that which is ‘created using a person's digital legacy from social media content, text messages and emails’. I define and use ‘deadbot’ here to highlight the conditions and potential consequences following the generative AI turn, including the rapidly developing potential for AI agents to make and remake memory, today and in the future.

27 For an overview of work on ‘memory booms’, see Hoskins and Halstead (2021).

32 See, for example, ‘Replika’ a personal AI ‘companion’, https://replika.com.

33 See also Tenenboim-Weinblatt's (2013) essay in Communication Theory on ‘mediated prospective memory’, exploring news media and journalism's memory work.

34 Dan Schacter interviewed by Andrew Hoskins for the Journal of Memory, Mind & Media, 29 June 2021. Available at https://www.youtube.com/watch?v=KRnV4WiadqA&t=4s.

38 Pau Garcia, interviewed by Andrew Hoskins, 28 May 2024.

43 Andrew Hoskins (2024) Public Lecture: ‘Forgetting Hiroshima: The crisis of living memory in the AI era’, Hiroshima Peace Memorial Museum, Hiroshima, Japan, 15 March.

References

Agarwal, S (2023) I outsourced my memory to AI for 3 weeks. Business Insider. Available at https://www.businessinsider.com/i-outsourced-my-memory-remember-what-you-read-using-ai-2023-1.
Albano, C (2022) Topologies of air: Shona Illingworth's art practice and the ethics of air. Digital War 5, 150–165 (2024). https://doi.org/10.1057/s42984-022-00053-6.
Assmann, A (2014) Forms of Forgetting. H401. Available at https://h401.org/2014/10/forms-of-forgetting/7584/.
Barnier, AJ (2010) Memories, memory studies and my iPhone: Editorial. Memory Studies 3(4), 293–297. https://doi.org/10.1177/1750698010376027.
Barnier, AJ and Hoskins, A (2018) Is there memory in the head, in the wild? Memory Studies 11(4), 386–390.
Bartlett, FC (1932) Remembering: A Study in Experimental and Social Psychology. Cambridge: Cambridge University Press.
Bassett, DJ (2022) The Creation and Inheritance of Digital Afterlives: You Only Live Twice. London: Palgrave Macmillan.
Birhane, A, et al. (2021) Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes. arXiv:2110.01963. https://doi.org/10.48550/arXiv.2110.01963.
Bowker, GC (2007) The past and the internet. In Karaganis, J (ed.), Structures of Participation in Digital Culture. New York: Social Science Research Council, pp. 20–36.
Brown, SD and Hoskins, A (2010) Terrorism in the new memory ecology: Mediating and remembering the 2005 London bombings. Behavioral Sciences of Terrorism and Political Aggression 2(2), 87–107.
Brown, R and Kulik, J (1977) Flashbulb memories. Cognition 5, 73–99.
Bryan, N (2024) AI photos show people with secondary breast cancer their lost future. BBC News, 30 March. Available at https://www.bbc.co.uk/news/uk-wales-68609431.
Conway, MA (1995) Flashbulb Memories. Hove, East Sussex: Lawrence Erlbaum Associates.
Conway, MA, Loveday, C and Cole, SN (2016) The remembering-imagining system. Memory Studies 9(3), 256–265.
de-Lima-Santos, MF and Ceron, W (2021) Artificial intelligence in news media: Current perceptions and future outlook. Journalism and Media 3(1), 13–26. https://doi.org/10.3390/journalmedia3010002.
De Figueiredo, M (2024) Our last, impossible conversation. The New York Times, 22 March. Available at https://www.nytimes.com/2024/03/22/style/modern-love-ai-our-last-impossible-conversation.html.
de Zúñiga, HG, Goyanes, M and Durotoye, T (2023) A scholarly definition of artificial intelligence (AI): Advancing AI as a conceptual framework in communication research. Political Communication 41(2), 317–334. https://doi.org/10.1080/10584609.2023.2290497.
Erdelyi, MH (2006) The unified theory of repression. Behavioral and Brain Sciences 29, 499–551.
Erll, A (2008) Literature, film, and the mediality of cultural memory. In Erll, A and Nünning, A (eds), Cultural Memory Studies: An Interdisciplinary Handbook. Berlin: Walter de Gruyter, pp. 389–398.
Erll, A (2022) The hidden power of implicit collective memory. Memory, Mind & Media 1, e14. https://doi.org/10.1017/mem.2022.7.
Fawns, T (2022) Remembering in the wild: Recontextualising and reconciling studies of media and memory. Memory, Mind & Media 1, e11. https://doi.org/10.1017/mem.2022.5.
Grusin, R (2004) Premediation. Criticism 46(1), 17–39.
Grusin, R (2010) Premediation: Affect and Mediality After 9/11. Basingstoke: Palgrave Macmillan.
Halbwachs, M (1980) The Collective Memory. Translated by Francis J. Ditter, Jr. and Vida Yazdi Ditter. London: Harper & Row.
Heaven, WD (2024) Generative AI can turn your most precious memories into photos that never existed. MIT Technology Review, 10 April. Available at https://www.technologyreview.com/2024/04/10/1091053/generative-ai-turn-your-most-precious-memories-into-photos.
Heikkilä, M (2024) What are AI agents? MIT Technology Review, 5 July. Available at https://www.technologyreview.com/2024/07/05/1094711/what-are-ai-agents/.
Henkel, LA (2014) Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychological Science 25, 396–402.
Hollanek, T and Nowaczyk-Basińska, K (2024) Griefbots, deadbots, postmortem avatars: On responsible applications of generative AI in the digital afterlife industry. Philosophy & Technology 37, 63. https://doi.org/10.1007/s13347-024-00744-w.
Hoskins, A (2009a) The mediatization of memory. In Garde-Hansen, J, Hoskins, A and Reading, A (eds), Save As… Digital Memories. Basingstoke: Palgrave Macmillan, pp. 27–43.
Hoskins, A (2009b) Flashbulb memories, psychology and media studies: Fertile ground for interdisciplinarity? Memory Studies 2(2), 147–150.
Hoskins, A (2011) Media, memory, metaphor: Remembering and the connective turn. Parallax 17(4), 19–31.
Hoskins, A (2013) The end of decay time. Memory Studies 6(4), 387–389. https://doi.org/10.1177/1750698013496197.
Hoskins, A (2014) The right to be forgotten in post-scarcity culture. In Ghezzi, A, Guimarães, PA and Vesnić-Alujević, L (eds), The Ethics of Memory in a Digital Age: Interrogating the Right to be Forgotten. Basingstoke: Palgrave Macmillan, pp. 50–64.
Hoskins, A (2015) Archive me! Media, memory, uncertainty. In Hajek, A, Lohmeier, C and Pentzold, C (eds), Memory in a Mediated World: Remembrance and Reconstruction. Basingstoke: Palgrave Macmillan, pp. 13–35.
Hoskins, A (2017a) Digital media and the precarity of memory. In Meade, M (ed.), Collaborative Remembering: Theories, Research, and Applications. Oxford: Oxford University Press, pp. 371–385. https://doi.org/10.1093/oso/9780198737865.003.0021.
Hoskins, A (2017b) Memory of the multitude: The end of collective memory. In Hoskins, A (ed.), Digital Memory Studies: Media Pasts in Transition. New York: Routledge, pp. 85–109.
Hoskins, A (2023) New memory and the archive. In Prescott, A and Wiggins, A (eds), Archives: Power, Truth and Fiction. Oxford: Oxford University Press, pp. 87–99.
Hoskins, A (in press) The forgetting ecology: Losing the past through digital media and AI. In Wang, Q and Hoskins, A (eds), The Remaking of Memory in the Age of the Internet and Social Media. Oxford: Oxford University Press, pp. 26–41.
Hoskins, A and Halstead, H (2021) The new grey of memory: Andrew Hoskins in conversation with Huw Halstead. Memory Studies 14(3), 675–685. https://doi.org/10.1177/17506980211010936.
Hoskins, A and Illingworth, S (2020) Inaccessible war: Media, memory, trauma and the blueprint. Digital War 1(1), 74–82. https://doi.org/10.1057/s42984-020-00025-8.
Hoskins, A and O'Loughlin, B (2009) Television and Terror: Conflicting Times and the Crisis of News Discourse. Basingstoke: Palgrave Macmillan.
Illingworth, S (2015) Amnesia Museum, installation, part of the Lesions in the Landscape exhibition, FACT, Liverpool, 18 September–22 November.
Kasket, E (2019) All the Ghosts in the Machine: Illusions of Immortality in the Digital Age. London: Robinson.
Kneese, T (2023) Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond. New Haven: Yale University Press.
Lagerkvist, A (2014) The Netlore of the infinite: Death (and beyond) in the digital memory ecology. New Review of Hypermedia and Multimedia 21(1–2), 185–195. https://doi.org/10.1080/13614568.2014.983563.
Lagerkvist, A (2017) The media end: Digital afterlife agencies and techno-existential closure. In Hoskins, A (ed.), Digital Memory Studies: Media Pasts in Transition. New York: Routledge, pp. 48–84.
Lagerkvist, A (2022) Existential Media: A Media Theory of the Limit Situation. Oxford: Oxford University Press.
Lowenthal, D (2012) The past made present. Historically Speaking 13(4), 2–6.
Magni, F, Park, J and Manchi Chao, M (2023) Humans as creativity gatekeepers: Are we biased against AI creativity? Journal of Business and Psychology 39, 643–656. https://doi.org/10.1007/s10869-023-09910-x.
Marsh, EJ and Rajaram, S (2019) The digital expansion of the mind: Implications of internet usage for memory and cognition. Journal of Applied Research in Memory and Cognition 8(1), 1–14. https://doi.org/10.1016/j.jarmac.2018.11.001.
McCarthy, J (2007) What is artificial intelligence? Available at https://www-formal.stanford.edu/jmc/whatisai.pdf.
McDermott, KB and Roediger, HL (2018) Memory (encoding, storage, retrieval). In Butler, A (ed.), General Psychology. Valparaiso University, pp. 117–140. Available at https://core.ac.uk/reader/303864230.
Meikle, G (2022) Deepfakes. Cambridge: Polity Press.
Merrin, W (2014) Media Studies 2.0. London: Routledge.
Merrin, W (2021) Hyporeality, the society of the selfie and identification politics. Mast: The Journal of Media Art Study and Theory 2(1), 16–39. https://doi.org/10.59547/26911566.2.1.02.
Merrin, W and Hoskins, A (2024) Sharded war: Seeing, not sharing. Digital War 5, 115–118. https://doi.org/10.1057/s42984-023-00086-5.
Middleton, D and Brown, SD (2005) The Social Psychology of Experience: Studies in Remembering and Forgetting. London: Sage.
Murphy, M (2022) What are foundation models? Available at https://research.ibm.com/blog/what-are-foundation-models.
Neisser, U (1982/2000) Memory Observed: Remembering in Natural Contexts. New York: W.H. Freeman and Company.
Öhman, C (2024) The Afterlife of Data: What Happens to Your Information When You Die and Why You Should Care. Chicago: The University of Chicago Press.
Öhman, C and Floridi, L (2018) An ethical framework for the digital afterlife industry. Nature Human Behaviour 2, 318–320.
Rajaram, S (in press) Exploring online social interactions in the remaking of memory. In Wang, Q and Hoskins, A (eds), The Remaking of Memory in the Age of the Internet and Social Media. Oxford: Oxford University Press, pp. 179–195.
Risko, EF, Kelly, MO, Lu, X and Pereira, AE (in press) Varieties of offloading memory: A framework. In Wang, Q and Hoskins, A (eds), The Remaking of Memory in the Age of the Internet and Social Media. Oxford: Oxford University Press, pp. 61–76.
Rogers, R (2024) How to use ChatGPT's memory feature. Wired, 29 April. Available at https://www.wired.com/story/how-to-use-chatgpt-memory-feature/.
Savin-Baden, M (2022) AI for Death and Dying. London: CRC Press.
Schacter, DL (1987) Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition 13, 501–518.
Schacter, DL (1996) Searching for Memory: The Brain, the Mind, and the Past. New York: Basic Books.
Schacter, DL (2021) Media, technology, and the sins of memory. Memory, Mind & Media 1, e1. https://doi.org/10.1017/mem.2021.3.
Schacter, DL and Graf, P (1986) Effects of elaborative processing on implicit and explicit memory for new associations. Journal of Experimental Psychology: Learning, Memory, and Cognition 12, 432–444.
Scott, L (2015) The Four-Dimensional Human: Ways of Being in the Digital World. London: William Heinemann.
Shum, DHK and Fleming, J (2011) Prospective memory. In Kreutzer, JS, DeLuca, J and Caplan, B (eds), Encyclopedia of Clinical Neuropsychology. New York: Springer, pp. 2056–2059. https://doi.org/10.1007/978-0-387-79948-3_1144.
Shumailov, I, Shumaylov, Z, Zhao, Y, Gal, Y, Papernot, N and Anderson, R (2023) The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv:2305.17493. https://doi.org/10.48550/arXiv.2305.17493 (accessed 10 July 2024).
Sisto, D (2020) Online Afterlives: Immortality, Memory, and Grief in Digital Culture (first published in Italian, 2018; Bonnie McClellan-Broussard, trans). Cambridge, MA: The MIT Press.
Stern, J (2024) ‘I died that day’ – AI brings back voices of children killed in shootings. The Wall Street Journal, 14 February. Available at https://www.wsj.com/tech/ai-brings-back-voices-of-children-killed-in-shootings-7d72cb8d.
Stone, CB and Zwolinski, A (2022) The mnemonic consequences associated with sharing personal photographs on social media. Memory, Mind & Media 1, e12. https://doi.org/10.1017/mem.2022.6.
Storm, BC, Bittner, D-L and Yamashiro, JK (in press) The changing dynamics and consequences of memory retrieval in the age of the internet. In Wang, Q and Hoskins, A (eds), The Remaking of Memory in the Age of the Internet and Social Media. Oxford: Oxford University Press, pp. 77–91.
Tenenboim-Weinblatt, K (2013) Bridging collective memories and public agendas: Toward a theory of mediated prospective memory. Communication Theory 23(2), 91–111. https://doi.org/10.1111/comt.12006.
Vallor, S (2024) The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford: Oxford University Press.
van der Werf, T and van der Werf, B (2022) Will archivists use AI to enhance or to dumb down our societal memory? AI & Society 37, 985–988. https://doi.org/10.1007/s00146-021-01359-x.
Wang, Q (2022) The triangular self in the social media era. Memory, Mind & Media 1, e4. https://doi.org/10.1017/mem.2021.6.
Wertsch, J (2002) Voices of Collective Remembering. Cambridge: Cambridge University Press.
Wong, M (2023) AI is an existential threat to itself. The Atlantic, 21 June. Available at https://www.theatlantic.com/technology/archive/2023/06/generative-ai-future-training-models/674478/.
Wong, WH (2023) We, the Data: Human Rights in the Digital Age. Cambridge, MA: MIT Press.
Zawacki, Z (2023) Exposing the sexism in generative AI. Mozilla, 15 March. Available at https://foundation.mozilla.org/en/blog/exposing-the-sexism-in-generative-ai/.
Zittrain, J (2024) We need to control AI agents now. The Atlantic, 2 July. Available at https://www.theatlantic.com/technology/archive/2024/07/ai-agents-safety-risks/678864/.
Figure 1. Image of Carmen and her mother at a balcony looking across to la Modelo prison, Barcelona, as recreated by Domestic Data Streamers (image reproduced here with kind permission of Domestic Data Streamers).