False claims, conspiracy theories and posts naming people with no connection to the incident spread rapidly across social media in the aftermath of conservative activist Charlie Kirk’s killing on Wednesday, some amplified and fueled by AI tools.

CBS News identified 10 posts by Grok, X’s AI chatbot, that misidentified the suspect before authorities released his identity; the suspect is now known to be southern Utah resident Tyler Robinson. Grok eventually generated a response saying it had incorrectly identified the suspect, but by then, posts featuring the wrong person’s face and name were already circulating across X.
The chatbot also generated altered “enhancements” of photos released by the FBI. One such photo was reposted by the Washington County Sheriff’s Office in Utah, which later posted an update saying, “this appears to be an AI enhanced photo” that distorted the clothing and facial features.
One AI-enhanced image portrayed a man appearing much older than Robinson, who is 22. An AI-generated video that smoothed out the suspect’s features and jumbled his shirt design was posted by an X user with more than 2 million followers and was reposted thousands of times.
On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok’s replies to X users’ inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.
CBS News also identified a dozen instances where Grok said that Kirk was alive the day following his death. Other Grok responses gave a false assassination date, labeled the FBI’s reward offer a “hoax” and said that reports about Kirk’s death “remain conflicting” even after his death had been confirmed.
Most generative AI tools produce results based on probability, which can make it challenging for them to provide accurate information in real time as events unfold, S. Shyam Sundar, a professor at Penn State University and the director of the university’s Center for Socially Responsible Artificial Intelligence, told CBS News.
“They look at what is the most likely next word or next passage,” Sundar said. “It’s not based on fact checking. It’s not based on any kind of reportage on the scene. It’s more based on the likelihood of this event occurring, and if there’s enough out there that might question his death, it might pick up on some of that.”
X did not respond to a request for comment about the false information Grok was posting.
Meanwhile, an X bot run by Perplexity, the AI-powered search engine, described the shooting as a “hypothetical scenario” in a since-deleted post and suggested a White House statement on Kirk’s death was fabricated.
Perplexity’s spokesperson told CBS News that “accurate AI is the core technology we are building and central to the experience in all of our products,” but that “Perplexity never claims to be 100% accurate.”
Another spokesperson added that the X bot is not up to date with improvements the company has made to its technology, and that the company has since removed the bot from X.
Google’s AI Overview, a summary of search results that sometimes appears at the top of searches, also provided inaccurate information. The AI Overview for a search late Thursday evening for Hunter Kozak, the last person to ask Kirk a question before he was killed, incorrectly identified him as the person of interest the FBI was looking for. By Friday morning, the false information no longer appeared for the same search.
“The vast majority of the queries seeking information on this topic return high quality and accurate responses,” a Google spokesperson told CBS News. “Given the rapidly evolving nature of this news, it’s possible that our systems misinterpreted web content or missed some context, as all Search features can do given the scale of the open web.”
Sundar told CBS News that people tend to perceive AI as being less biased or more reliable than someone online who they don’t know.
“We don’t think of machines as being partisan or biased or wanting to sow seeds of dissent,” Sundar said. “If it’s just a social media friend or somebody on the contact list that’s sent something on your feed with unknown pedigree … chances are people trust the machine more than they do the random human.”
Misinformation may also be coming from foreign sources, according to Cox, who said in a press briefing on Thursday that foreign adversaries including Russia and China have bots that “are trying to instill disinformation and encourage violence.” Cox urged listeners to spend less time on social media.
“I would encourage you to ignore those and turn off those streams, and to spend a little more time with our families,” he said.