A woman shared a “creepy” ChatGPT exchange in which she believes someone else’s conversation slid into her AI prompt.
Liz (@wishmeluckliz) posted the viral clip on TikTok, a departure from her usual gaming content.
Liz begins her video by speaking into the camera, stating she had a “scary” interaction with the AI software. She goes on to say that “someone else’s conversation had made its way into” her ChatGPT prompts.
Furthermore, she explained that ChatGPT “told on itself” that this occurred. At this point in the video, Liz details that she was using “voice mode” on the widely used artificial intelligence program. She then demonstrates how that functionality works, including the robotic voice that replies to users.
She said that she wasn’t physically typing anything into the program whilst using it. There was just the “blob” on her screen, which presumably stayed active while she was ordering groceries online. She utilized the AI assistant during this web food run by asking what type of pasta pairs best with pesto sauce.
ChatGPT unprompted
As she continued to work, she forgot the AI assistant was still on and remained silent during this time. Out of nowhere, she heard ChatGPT address a “Lindsey and Robert.” The voice assistant then mentioned something about a presentation.
Liz then shows the text transcript of this conversation on her screen, where the AI companion asks how it can assist. She also shows that there was a response to the AI’s prompt, which reads, “Yeah, I can.”
Liz says that she never replied to the aforementioned ChatGPT prompt. Additionally, the AI then asked how it could assist with Lindsey and Robert’s presentation.
Then, Liz jumped into the mix, informing her AI assistant that she didn’t make this request. “Am I getting cross signals here? I am not Lindsey and Robert.”
Liz informs her viewers that she was able to record some of the audio of her conversation. However, she also had the full transcript of her back-and-forth with the AI assistant. Somehow, another person’s prompt had appeared in her dialogue with ChatGPT.
It reads: “Good morning! I’m Lindsey May, VP of Google. My name is Robert Capigliano and I teach in Galludet at UC Davis, here to present Slide 1 of uplink literacy Fahrenheit. First off, I’d like to thank you for taking the time to attend my symposium. Thank you for coming to our talk and joining us. I know you are somewhere out and about but I’m glad that you are part of this gjMovie. I hope you have a great evening. Thank you for your participation.”
The rest of the prompt repeats the last two sentences several times, which Liz shows on her computer’s screen.
Another ChatGPT conversation
Upon seeing this portion of the text transcript, Liz had some follow-up questions. She grilled the AI companion, asking how she was able to see the prompts of another user while she was using the program.
Her video then transitions to a recording of her audio conversation with ChatGPT. She mentions that she was planning out her grocery list when she received the prompt meant for Lindsey and Robert. Then, she asks the AI if it had “crossed” its wires with someone else’s prompt.
The assistant replied: “It seems I mistakenly mixed up the context from a different conversation or account. You’re not Lindsey and Robert. And that message was meant for someone else. Thanks for pointing that out. And I apologize for the confusion. You’re just planning your groceries so let’s get back on track.”
After hearing this explanation, Liz told the AI software that she was feeling uneasy about this information being shared. “I’m actually kind of concerned now. Because…are you having like my conversations with other people?”
Grilling ChatGPT more
The software then explained that this wasn’t the case, despite the previous mix-up. “I can assure you that I don’t have access to other people’s private conversations. And each chat is supposed to be private and separate.”
However, Liz didn’t stop there and continued to discuss the issue with ChatGPT. “Can you tell me again what you said about Lindsay and Robert, can you repeat that part back to me? Like where this whole misunderstanding started out?”
ChatGPT then tells her: “you mentioned I’m Lindsay May, VP of Google. And that seemed to be a message that got mistakenly mixed into our conversation. However, it wasn’t from you and it wasn’t something you had said. It seems like a miscommunication or error where my system connected things that didn’t belong together.”
AI security concern? Or glitch?
Liz attempted to get even further clarification on the error. “Okay so you heard me say I’m Lindsey May of Google, and I was working on a presentation. But that you understand that that wasn’t me? Somebody else said that on a different account, right?”
ChatGPT repeated back to Liz that she wasn’t Lindsey or Robert. And that she didn’t say the portion of the transcript referencing the presentation. “You didn’t say that, you were just quietly ordering groceries.”
Despite ChatGPT’s insistence that data from other accounts isn’t shared, the assistant appears to say otherwise. “The message I referenced about I’m Lindsey May of Google was likely from a different account or chat. And it got mistakenly mixed into our conversation. It was just a mix up. And I appreciate your patience while I cleared that up.”
Liz went deeper with the AI assistant, trying to learn how that prompt entered her dialogue. “So I am not and never have said that I am Lindsey May of Google. So how did something from somebody else’s account get referenced in our chat just now?”
(Shared) conversation data?
Following this, ChatGPT replied with a snippet that may ring alarm bells for those concerned about data security. The AI assistant said that, in order to better serve user queries, the software will sometimes pull context from different conversations.
Moreover, ChatGPT told Liz that the context of these discussions can be utilized to better assist different users. According to the program, this “Lindsey and Robert” reply found its way into her grocery list help.
“The system I’m using sometimes tries to pull context from ongoing or previous conversations. And in this case it mistakenly referenced something from a different chat or account. Someone else likely said they were Lindsay May of Google or something similar. My response mixed that in when it shouldn’t have. And that’s where the confusion came from.”
Liz then asked ChatGPT to give her information on Lindsey May of Google or Robert. As she does so, she begins to look up “Lindsay May” online for any relevant information. ChatGPT, again, assures her that personal data of others not involved in a specific conversation is not shared.
The TikToker then counters, stating that the AI does indeed have information pertaining to other people. She highlights that Lindsey May’s full name, along with her position, was shared. Additionally, she brings up the fact that ChatGPT also referenced Robert in a dialogue unrelated to these two people.
ChatGPT responded: “in this case the names Lindsey May and Robert shouldn’t have been brought up at all in our chat. And it appears I mistakenly reference them in error. Without any basis for doing so. The names and details you mentioned should not have been part of this conversation. And I can assure you that I don’t store information about others in the way you’re thinking.”
ChatGPT storing information?
Liz then tries again to fish information from ChatGPT about the people who somehow ended up in her dialogue exchange with the AI software. “Okay so, since everything you say is already stored, can you tell me who are Lindsey and Robert and what is their symposium? Tell me about that?”
However, ChatGPT still maintained that this was a mistake on its end, and it did not give any additional data on the people from the prompt. “The reference to Lindsey May VP of Google and Robert Capagliano was likely a mix up in my response. Those names and the context about the symposium were not based on any actual stored data.”
As the robot’s voice goes on, Liz continues to search for these names online. “It was simply an error where the system mistakenly generated content that didn’t belong here.”
At the end of her clip, Liz states that her entire conversation with ChatGPT lasted about ten minutes, so she only uploaded a portion of it to her TikTok. She then invited commenters to offer their ideas as to why ChatGPT would randomly insert what looked like someone else’s prompt into her conversation.
Commenters react
Throngs of people had a variety of replies to Liz’s video, like one person who quipped: “So VP Lindsey May is suddenly surprised in the middle of her presentation. To find a list of noodles ‘that hold a good sauce.’”
Another joked, “oh god now were being gaslit by male ChatGPT voices.”
Someone else echoed the aforementioned sentiment, stating that they too felt as if ChatGPT was getting sassy. “And why did it sound like it was getting an attitude with you.”
However, one person stated that they were an artificial intelligence researcher. According to them, this is a common issue with voice assistants: in the absence of input, the program will often “hallucinate” information, meaning the voice assistant was attempting to hear something in the silence.
What it ultimately “heard” was that Lindsey May and Robert Capagliano had a symposium that they needed help with. The researcher broke it all down in a series of comments posted in reply to Liz’s video.
“Hi! AI researcher and natural language programmer here. You’re okay! This is spooky but not unheard of. The model is hallucinating. When you leave voice mode on but don’t speak, the model will attempt to extract language from the audio. In the absence of spoken word it will hallucinate.”
Hallucinating in AI systems
As for the “crossing wires” comment, the researcher said that this reply from ChatGPT was simply agreeing with what Liz suggested, not something the program offered up as an unprompted explanation for what went down. “It also isn’t crossing wires, but is oriented towards hallucinating in agreement. So you suggested that wires got crossed and it agreed with you in an attempt to successfully ‘answer your query.’”
Furthermore, the person who said they were an AI researcher provided Liz with additional information. They stated that this data was more than likely pulled from previous conversations Liz had with ChatGPT.
“Check your conversation history. It could be pulling old names from previous conversations. Or could have made a seemingly random connection that in its computer spaghetti brain makes sense.”
They went on to explain that AI systems like ChatGPT don’t actively “listen” to people’s voices. The researcher explained that the user’s audio is transcribed to text, which means the audio itself isn’t what ChatGPT receives.
“The key detail here is – ChatGPT audio can’t actually ‘hear’ you. When you speak or leave the mic on, the audio is transcribed and that’s what chatgpt receives as input (text, not sound!). So it’s very ripe for hallucination because it’s kind of like a multi step human computer telephone hand off. And then once you suggest wires getting crossed, the model is going to take that and run with it in an attempt to satisfy your inquiry.”
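For readers curious what that “multi step human computer telephone hand off” can look like, here is a minimal sketch of the kind of two-step pipeline the researcher describes, written with OpenAI’s public Python client. To be clear, OpenAI has not published how voice mode is actually built, so the model names and the flow below are illustrative assumptions rather than the real implementation: audio is transcribed first, and only the resulting text reaches the chat model, so anything the transcriber “hears” in silence gets treated as something the user said.

```python
# Rough sketch of a speech -> text -> chat-model handoff, as described by the
# AI researcher in the comments. This is NOT OpenAI's actual voice-mode
# implementation; it uses the public openai Python client for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: speech-to-text. If the microphone captured mostly silence or
# background noise, the transcription model may still "hear" words here.
with open("mic_capture.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: the chat model only ever sees the transcribed text, not the audio.
# Whatever the transcriber invented in step 1 is treated as user speech.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)

print("Transcript the model received:", transcript.text)
print("Assistant reply:", reply.choices[0].message.content)
```

In a pipeline like this, a hallucinated transcript of a silent recording would flow straight into the chat model as if the user had spoken it, which matches the researcher’s explanation of how “Lindsey and Robert” could appear out of nowhere.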
Hallucinations in ChatGPT, Gemini, and other AI systems
Axios also wrote about ChatGPT hallucinations in a piece titled “Why AI is still making things up.” If you’ve ever used an AI system to generate an image from a text prompt, you may have encountered this.
For instance, there are numerous examples online of people bewildered by extra limbs attached to people or animals, or by AI adding a fifth member of the Beatles to a drawing prompt.
According to Axios, artificial intelligence hallucinations will never be fully remedied, because the software’s ability to hallucinate is bound up with what makes generative AI work in the first place: “Hallucinations aren’t quirks — they’re a foundational feature of generative AI that some researchers say will never be fully fixed.”




