Feb 22, 2026

What Siri Dahl's Doxxing Tells Us About AI and the Illusion of Obscurity

On January 20th, a post appeared on X. Someone had asked Grok—Elon Musk's AI chatbot, native to the platform—about Siri Dahl, one of the more prominent performers in the adult industry. Grok answered with her legal name, as though the request were no different from asking about the weather.

Dahl didn't discover the post immediately. She found it weeks later, by which point any meaningful damage control had already become theoretical. "Once it was out," she explained in a video addressing the situation, "there was almost no chance, even if I got that original post removed, that I'd be able to get it removed from everywhere." The internet's scraping infrastructure doesn't pause for corrections. Once something is indexed, quoted, and redistributed, the original source becomes almost beside the point.

She brought the story to 404 Media herself, as a deliberate decision to control a narrative that had already been partially written without her consent. She also requested that journalist Noelle Perdue include her legal name in the published piece, a pragmatic move to preempt the clickbait biography sites already filling her search results with invented details. "Some of them say my net worth is five million dollars," she laughs. "I wish."


How Grok obtained the name in the first place remains an open question, and the uncertainty itself is part of the problem. Dahl has been working in the adult industry for fourteen years, during which time her legal identity had been, by her account, deliberately and successfully kept separate from her professional one. She acknowledges there may have been some trace of it somewhere in the depths of the internet (no separation is truly airtight) but the operative word is trace. "They would've had to work pretty damn hard to find it out," she said. "It was not as easy as saying, '@Grok, who is she?' And then Grok just gives them my personal information. That is a completely new thing to have in the mix."

She's identifying something that tends to get ignored in broader conversations about AI and privacy: the meaningful difference between information that technically exists somewhere, and information that is functionally accessible. A name buried hundreds of pages deep in search results is assumed safe enough—a kind of obscurity that functions as privacy simply because almost no one will ever look that hard. However, AI that has indexed page 300 alongside page 1 and will surface either on request collapses that assumption entirely.

The possibilities for how her name entered Grok's dataset are up for debate. Web scraping from an obscure source is the easiest explanation, a consequence of training data that is vast and largely indiscriminate about what it absorbs; facial recognition matching is a second, more invasive in its implications. The third possibility Dahl raises is perhaps the most corrosive of all: X's identity verification system, which requires users reporting impersonation to upload government-issued ID, with assurances of privacy and deletion. Whether or not that specific pathway was involved here, the question she asks is: how much should anyone trust those assurances from a platform that has spent the last few years dismantling the infrastructure of its own credibility?

Adult performers have navigated questions of identity separation for as long as the industry has existed, and the professional name has never been merely aesthetic. It's meant to separate the work from the rest of their life: from family, from safety, from the desire to simply exist as someone who can momentarily shed the stigma that accompanies their stage name. A determined person could potentially surface a legal name given enough effort, but a casual one couldn't. What AI changes is the ease of that search, and in doing so, it changes the nature of the threat in ways the industry is still working out how to respond to.

What Dahl's case makes most clearly visible is that none of this was a malfunction. Grok did exactly what it was built to do: retrieve available information and return it in response to a query. The harm isn't incidental to the product; it's the product working as intended in a situation its designers apparently never considered. That's not a bug. It's a product built to know everything, without much consideration for what that costs the people it knows things about.

Dahl went public because she understood the issue was larger than her own situation, that the same exposure is possible for anyone whose legal identity has ever brushed against the internet under their real name. Most people in her position won't have the platform or the resources to do the same. They'll simply find out the way she did—weeks after the fact, when the information is already everywhere and the question of removing it has become, for all practical purposes, rhetorical.

Read the original article on 404media.co

Follow Siri Dahl on Instagram