OpenAI To Monitor ChatGPT Conversations; Privacy Concerns Rise As User Data Could Reach Law Enforcement
OpenAI has confirmed it will monitor ChatGPT chats to detect threats of violence or other harmful content. The decision, prompted by recent safety incidents, has sparked global debate about privacy, security, and free speech.

Tech News: OpenAI, the company behind ChatGPT, has revealed in a blog post that it has begun monitoring users' conversations with the chatbot. If a chat shows signs that a user may harm another person, it is immediately escalated to a specialized review team. If that team judges the threat to be serious, the company can share the user's chat information with law enforcement without delay.
The disclosure has raised serious questions, because until now many users assumed their conversations with ChatGPT were private. OpenAI's review team can ban an account and contact the police if it detects a serious threat, which means conversations with ChatGPT can no longer be considered fully private.
Growing concern over AI Safety
The step comes amid growing concerns about AI safety. In a recent case, a man who had been talking to ChatGPT over an extended period killed his mother and then took his own life. The man, 56-year-old Stein-Erik Soelberg, reportedly considered the chatbot his "best friend".
ChatGPT Fuels Dangerous Conspiracies
Screenshots of the man's conversations showed ChatGPT affirming his conspiracy theories, including his belief that his elderly mother was trying to poison him. "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified," the chatbot reportedly told him during a discussion of the suspected poisoning attempt.