- 300,000 users of OmniGPT, an AI chat service, have had their private conversations leaked in a data breach.
- The leak occurred because the service left a database exposed without any authentication.
- The incident erodes trust in emerging AI platforms and underscores the broader threat of cybercrime in the digital world.

A large amount of confidential conversation data from users of the OmniGPT application has been exposed online without any form of protection. OmniGPT is a chat-style application that lets users converse with various artificial-intelligence models.
A security researcher discovered a database containing more than 30 million conversations between users and AI models, sitting online as an open database with no password protection. Anyone with an Internet connection could access and browse it without authenticating in any way.
Until it was secured, anyone who located the open database could have accessed the names, email addresses, and passwords of approximately 300,000 OmniGPT users. Because the leaked conversations were often deeply private, Internet security experts have labeled the incident a “catastrophic” failure.
The event also highlights growing concerns about personal privacy on new AI-based services: users share large amounts of private information with them, chiefly in the form of conversations, and that data carries real risk if exposed.
How a Simple Mistake Caused a Major Breach
The leak happened because of a basic security error. OmniGPT, like many online services, stores user information in a database. Such a database must sit behind a strong authentication system; in this case, it did not.
The researcher who discovered the database reported that it had no known form of protection and was publicly reachable: no firewall and no access password stood between it and unauthorized visitors.
The database was hosted with a well-known cloud provider but was misconfigured. Misconfiguration of this kind, such as failing to create, install, or enable the security features of an application or database, is a common cause of large-scale data leaks throughout the technology industry.
Experts usually find open databases like this by sweeping the Internet with scanning tools, and the researcher was able to locate the OmniGPT database very easily.
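Internet scanning tools of the kind mentioned above work, at their core, by probing address ranges for open service ports. A minimal sketch in Python follows; the host and ports here are illustrative, not OmniGPT's, and a real scanner would additionally check whether the service that answers demands authentication.

```python
import socket

# Known default ports for popular databases; scanners sweep address
# ranges probing ports like these for services that answer.
DB_PORTS = {27017: "MongoDB", 5432: "PostgreSQL", 9200: "Elasticsearch"}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative scan of the local machine only; real scanners iterate
# over entire public address ranges.
for port, name in DB_PORTS.items():
    status = "open" if port_open("127.0.0.1", port) else "closed"
    print(f"{name} ({port}): {status}")
```

Services such as Shodan run scans like this continuously at Internet scale, which is why an unauthenticated database rarely goes unnoticed for long.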
He then contacted OmniGPT to report the open data archive and emailed the company a link to it.
Unfortunately, the report stated that the data remained exposed for five to seven days before OmniGPT secured it against unauthorized access, and it is still unknown whether anyone else reached the sensitive data during that window.
What Sensitive Information Was Exposed?
The scope of the leaked private data is alarming because conversations with AI are often highly personal: users treat chatbots as private and confidential, turning to them for brainstorming, drafting private documents, and seeking personal advice.
The database contained several critical types of information:
- Full conversation histories exposed every message exchanged between users and AI models.
- User account details included email addresses and hashed passwords, which attackers can sometimes reverse.
- Query metadata included information about when chats occurred and which AI model users accessed.
This combination is dangerous. A leaked email address is a problem on its own; linking that address to a person's private thoughts and questions is a far more serious breach of privacy. Bad actors could use the combined data for targeted phishing, blackmail, or detailed profiling of individuals.
This incident also bears out warnings from security advisories, including those of CISA (the Cybersecurity and Infrastructure Security Agency), that misconfigured cloud services are among the top security risks organizations face.
The Broader Impact and User Advice
This leak is a major blow to trust in emerging AI platforms. Users share deeply personal information with these tools, from health concerns to business ideas, expecting confidentiality.
OmniGPT has not issued a public statement about the breach or explained how it notified the 300,000 affected users.
Security experts are advising affected users to take the following steps:
- Change your OmniGPT password immediately; if you reuse that password on other sites, change it there as well.
- Be alert for fraudulent emails that impersonate OmniGPT or reference your AI conversations in order to extract more personal information.
- Check your accounts routinely for any suspicious activity.
Users unaffected by this breach can treat it as a reminder of how careful they must be when handing any personal information to an AI service. It’s important to check whether a new platform has a good reputation for both security and privacy before using it.
The US Federal Trade Commission (FTC) offers resources on protecting personal information online and on what to do after a data breach.
As artificial intelligence becomes an ever-larger part of daily life, this incident makes clear that the companies building these powerful tools must prioritize the security and privacy of their users above all else.
A single configuration mistake should not be able to expose the inner thoughts of hundreds of thousands of people to the world. Yet, as the relentless pace of major breaches shows, such mistakes are becoming a recurring feature of the digital ecosystem, exploited by threat actors whose tactics are constantly evolving.