Is AI That Continues Social Media Activity in Place of the Dead Not Technological Progress, but a Retreat of Boundaries?

Introduction

A patent obtained by Meta has sparked controversy: it describes a language model trained on a user’s past posts and reactions that can post or respond on that person’s behalf during their absence. The patent, granted in the United States on December 30, 2025, explicitly states that this could apply not only during prolonged absence but also in cases where the user has died. In response to the reporting, Meta has explained that it has no current plans to implement this technology.

What makes this topic feel so deeply unsettling is not simply that it is “creepy.” More importantly, it dangerously connects, within a single technological concept, questions of personal identity on social media, dignity after death, the grief of family and friends, and the revenue model of platform companies. To me, this patent feels less like a novel invention for the age of AI and more like a signal that tech companies are beginning to lay hands on boundaries they should never cross.

What This Patent Really Reveals

The core of this patent is not merely the automation of scheduled posts. Rather, it describes a system in which a bot calls up a language model that has been fine-tuned for an individual using that user’s past behavior, generates “how that person would respond” to the posts of others, and then publishes those responses in the user’s place. The patent text even notes that, as a result, other users might not realize that the person is absent.

What emerges here is neither AI assistance nor even AI acting on someone’s behalf. It is the staged continuation of personhood by AI. And the target of that performance is not merely text generation. It is social relationships themselves. The idea is that the platform would imitate and keep circulating the “sense of this person” built up through comments, messages, and reactions.

What makes this idea so dangerous is that it treats what people leave behind on social media as nothing more than data resources. Certainly, posting history is data. But at the same time, it is also fragments of a person’s life, traces of relationships, and the accumulated sediment of time. To reuse that material as a posthumous output system confuses what is technically possible with what is socially acceptable.

Why “There Is No Intent to Deceive” Is Not Enough

Some of the reporting has suggested that, because the patent uses the word “simulation,” its primary purpose is not to deceive others. That may be true. Making someone appear alive is perhaps not its sole aim. Meta has also explained that obtaining a patent does not mean the technology will immediately become a product.

Even so, the problem remains. On social media, “being the person” is not simply a binary question of fraud or no fraud. Even if the account were explicitly labeled “This is AI,” people would still perceive a residue of the person’s identity as long as that AI kept reproducing the deceased’s writing style, tone, and response patterns in ongoing exchanges with others. In other words, regardless of whether legal deception is established, psychological and relational confusion can easily arise.

What is more, the patent frames the issue as follows: when a user is away for a long time, followers’ experience suffers, and when a user dies, that effect becomes more serious and permanent. Turned around, this means absence and death are being understood as problems of declining engagement. That is where the deepest ethical discomfort lies. The very fact that a person’s death is first described as a loss of platform experience already reveals a skewed value judgment.

How Is This Different from Existing Memorial Accounts?

Facebook has long had mechanisms for handling accounts after death, such as converting them into memorialized accounts, designating a legacy contact, or deleting the account after death. In a memorialized account, the deceased does not log in, and the scope of what can be managed is limited.

The difference between those existing systems and the patented concept here is decisive. Memorialized accounts preserve traces of the deceased without generating new activity in their name. In that sense, they are a form of preserving memory. By contrast, AI-driven continuation of posting after death is a reenactment of activity. It is not preservation, but generation; not remembrance, but updating.

That difference is profound. When someone has died, the fact that they make no new statements is precisely what gives their remaining posts, photos, and messages meaning as authentic traces of the past. But if AI continues speaking afterward, the deceased person’s real past words and posthumously generated imitations begin to mix within the same account. At that point, the boundary becomes blurred: are we remembering the deceased, or are we consuming a platform-generated imitation of “what the deceased was like”?

Why Move in This Direction in the Age of AI Slop?

Today’s social media environment should be focused first on protecting what is genuinely human. Low-quality AI-generated content is already spreading, it is becoming harder to tell whose words are whose, and even as quantity increases, trust erodes. In that environment, introducing “AI that pretends to be the dead” would further destabilize the foundations of trust on these platforms.

What is especially serious is that the social meaning of death itself risks being diluted by a design built around continued posting. There is meaning in an account falling silent. There is meaning in updates stopping. That silence contains time for people to come to terms with loss. But if AI fills that silence, then the platform begins to intrude even into the process of grieving.

Of course, technologies that help comfort bereaved families or preserve memories may well have a place. But even for such purposes, what is needed is not “acting as if one were the deceased.” An archive that helps people remember, or a system that organizes records the person explicitly left behind while alive, is fundamentally different from a system that continues speaking in the person’s place after death. The two may look similar on the surface, but they are not the same thing at all.

It Cannot Be Dismissed Simply Because It Is “Only a Patent”

A patent is, after all, protection for an idea; it is not a promise of implementation. It is important to remain clear-eyed about that point. In fact, Meta has said it has no plans at present to move this forward.

Even so, patent applications and grants have significance. They reveal what kinds of futures a company considers worth exploring, and what kinds of domains it evaluates as potentially viable business territory. In this case, the patent was filed on November 29, 2023, and granted on December 30, 2025. This was not just an offhand idea or casual speculation. It was a concept advanced toward legal protection through a meaningful investment of time and cost.

That is precisely why this news should not be brushed aside as merely an eccentric story. The issue is not only whether Meta will launch this tomorrow. The issue is that the imaginative reach of tech companies has already extended this far. They are beginning to enclose, optimize, and patent even the expression of personhood after death as a target of service design. It is that direction itself that we need to question.

Conclusion

AI has value in supporting human creativity, work, and communication. But when it begins to fill even human absence, extend operation beyond death, and intervene in the very way human relationships come to an end, it exceeds the realm of mere convenience. At that point, the question is no longer what can be done, but what must not be done.

What social media companies should be confronting is not how to keep people posting after death. Rather, it is how to design respectful ways for accounts to end, for memories to remain, and for family and friends to draw a quiet line under loss. Human life has an end, and digital space, too, should observe that fact with restraint. If AI serves to blur that boundary, then I believe it is not progress, but a design philosophy from which we should retreat.