
OpenAI never disclosed that hackers cracked its internal messaging system


A hacker managed to infiltrate OpenAI’s internal messaging system last year and abscond with details about the company’s AI designs, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; however, the systems where the actual GPT code and user data are stored were not affected.


While OpenAI disclosed the breach to its employees and board members in April 2023, it declined to notify either the public or the FBI, claiming that doing so was unnecessary because no user or partner data was stolen. OpenAI does not consider the attack a national security threat and believes the attacker was a single individual with no ties to foreign powers.

Per the NYT, former OpenAI employee Leopold Aschenbrenner previously raised concerns about the state of the company’s security apparatus and warned that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times his termination was unrelated to the memo.

This is far from the first security lapse OpenAI has suffered. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, usernames and passwords were leaked in a separate hack. In March of the previous year, OpenAI had to take ChatGPT offline entirely to fix a bug that revealed users’ payment information, including first and last names, email addresses, payment addresses, credit card types, and the last four digits of card numbers, to other active users. Last December, security researchers discovered that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to endlessly repeat the word “poem.”

“ChatGPT is not secure. Period,” AI researcher Gary Marcus told The Street in January. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.” Since the attack, OpenAI has taken steps to beef up its security systems, including installing additional safety guardrails to prevent unauthorized access and misuse of the models, as well as establishing a Safety and Security Committee to address future issues.

Andrew Tarantola
Former Computing Writer
Microsoft considers developing AI models to better control Copilot features

Microsoft may be on its way to developing AI models independent of its partnership with OpenAI. Over time, OpenAI has expanded its influence in the industry, and Microsoft has lost its exclusive standing with the brand. Several reports indicate Microsoft is looking to create its own “frontier AI models” so it doesn’t have to depend as much on third-party sources to power its services.

Microsoft and OpenAI have been in a notable partnership since 2021. However, January reports indicated tensions in the collaboration over OpenAI's GPT-4, with Microsoft having said the model was too pricey and didn’t perform to consumer expectations. Meanwhile, OpenAI has been busy with several business ventures, having announced its $500 billion Stargate project, a collaborative effort with the U.S. government to construct AI data centers nationwide. The company also recently closed its latest investment round, led by SoftBank, raising $40 billion and putting its current valuation at $300 billion, Windows Central noted.

OpenAI might start watermarking ChatGPT images — but only for free users

Everyone has been talking about ChatGPT's new image-generation feature lately, and it seems the excitement isn't over yet. As always, people have been poking around inside the company's apps, and this time they've found mentions of a watermark feature for generated images.

Spotted by X user Tibor Blaho, the line of code image_gen_watermark_for_free seems to suggest that the feature would only slap watermarks on images generated by free users, giving them yet another incentive to upgrade to a paid subscription.

OpenAI adjusts AI roadmap for better GPT-5

OpenAI is reconfiguring its rollout plan for upcoming AI models. The company’s CEO, Sam Altman, shared on social media on Friday that OpenAI will delay the launch of its GPT-5 large language model (LLM) in favor of releasing some lighter reasoning models first.

The brand will now launch new o3 and o4-mini reasoning models in the coming weeks instead of the GPT-5 launch fans were expecting. In the meantime, OpenAI will be smoothing out issues in the LLM’s development before a final rollout. The company hasn’t detailed a specific timeline, indicating only that GPT-5 should be available in the coming months.
