
OpenAI’s Undisclosed Security Breach

OpenAI, the company behind ChatGPT, suffered a security breach early last year. A hacker gained access to the company’s internal communications systems and obtained details about the design of its artificial intelligence (AI) technology.

The hacker lifted the details from a private online forum where OpenAI employees discussed the company’s latest technology. The attacker did not, however, get into the systems where the company develops and houses its AI.

OpenAI executives disclosed the breach to employees during an all-hands meeting at the company’s San Francisco office in April 2023 and also informed the board of directors, according to two individuals who requested anonymity.

Although it told employees and the board, OpenAI decided not to make the breach public, stating that no customer or partner data had been exposed.

Executives did not treat the incident as a national security issue, believing the hacker to be a private individual with no ties to any foreign government. As a result, the company did not contact the FBI or other law enforcement agencies.

However, this decision proved controversial, particularly as several senior employees, including chief scientist Ilya Sutskever, had recently left OpenAI over concerns about its safety culture.

Keeping the attack from the public did not put the company’s security worries to rest. Following the incident, several employees, including former technical programme manager Leopold Aschenbrenner, raised concerns about potential threats to US national security.

After the breach, Aschenbrenner sent a memo to the board arguing that the company’s security measures were insufficient to thwart attacks from foreign adversaries.

Aschenbrenner claimed that OpenAI fired him for politically motivated reasons after he revealed further corporate secrets in the spring. OpenAI denied the allegations, saying that his dismissal was unrelated to his security warnings and that his assessments of the company’s security were incorrect.

The emergence of a previously undisclosed security failure, one that OpenAI’s leadership apparently believed it could handle without government oversight, does little to restore the company’s damaged reputation on security.

This week’s other security news is not much better. Software engineer Pedro José Pereira Vieito found that the macOS version of ChatGPT bypassed the Mac’s built-in sandboxing and stored records of user conversations in plain text in an unprotected directory, exposing sensitive data. OpenAI has since reportedly fixed the issue.
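To illustrate why plain-text storage outside the sandbox matters, here is a minimal Python sketch. The directory name below is an assumption for illustration only, not the app’s actual path; the point is that any process running under the same user account can read unencrypted files that sit outside an app’s sandbox container, with no special permissions required.

```python
import pathlib

# Hypothetical storage location, for illustration only; the actual path
# used by the ChatGPT macOS app may differ.
chat_dir = pathlib.Path.home() / "Library" / "Application Support" / "com.openai.chat"

# Any process running as the logged-in user can enumerate and read these
# files: plain-text storage outside a sandbox container means a simple
# file read is enough to harvest conversation contents.
for record in sorted(chat_dir.glob("*.json")):
    preview = record.read_text(errors="replace")[:120]
    print(f"{record.name}: {preview}")
```

Sandboxed apps, by contrast, keep their data in containers that macOS shields from other applications, which is why opting out of the sandbox and skipping encryption drew criticism.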

OpenAI and the Rise of Open-Source AI: Balancing Security and Innovation

OpenAI is not the only company using rapidly advancing AI technology to build powerful systems. Some companies, such as Meta, the parent company of Facebook and Instagram, release their designs as open-source software.

Many tech companies believe that today’s AI technologies pose minimal risk and that sharing code allows engineers across the industry to identify and fix problems.

Companies such as OpenAI, Anthropic, and Google build safeguards into their AI applications before releasing them to the public, with the goal of preventing misuse and misinformation.

Researchers and industry executives have long warned that AI could one day help develop new bioweapons or break into government computer networks. Some even worry that it could threaten humanity’s very survival.

As a result, companies such as Anthropic and OpenAI are tightening their security protocols. OpenAI recently formed a Safety and Security Committee to address the risks posed by emerging technology.

Paul Nakasone, a retired Army general who led the National Security Agency and Cyber Command and recently joined OpenAI’s board of directors, sits on this committee.

Federal and state lawmakers are pushing for regulations that would bar companies from releasing certain AI technologies and impose severe penalties if those technologies cause harm.

Phishing Tackle provides training course videos that cover different types of AI threats, designed to counter the ever-changing risks posed by artificial intelligence. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge with simulated phishing attacks.
