A woman engaging in a conversation with a friendly orange chatbot using her tablet device.

Giving AI Chatbots Access To Sensitive Data, Including Passwords And PII

Advanced language model chatbots in the UK are receiving confidential data, including source code and personally identifiable information (PII). The use of artificial intelligence (AI) applications in enterprises has risen sharply over the previous two months, up 22.5%.

As AI-powered chatbots become more popular, it is important to understand their advantages and disadvantages. Cyberattacks and privacy issues are two of the intrinsic risks of using them.

Notably, chatbots designed to be helpful, such as ChatGPT, Bard, Bing AI, and others, may unintentionally expose personal information on the internet. The AI language models behind these chatbots learn from the content users submit to them. OpenAI recently fixed an issue in ChatGPT that was displaying one user’s information to other users.

According to research from Netskope Threat Labs, a business organisation posts sensitive data to generative AI apps on average 179 times per month for every 10,000 employees.

Ray Canzanese, the director of threat research at Netskope Threat Labs, stated:

It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing. Therefore, it is imperative for organisations to place controls around AI to prevent sensitive data leaks.

Source code is the sensitive data most often shared with ChatGPT, according to Netskope, at 158 incidents per 10,000 users each month. Other shared data includes personally identifiable information (excluding source code), passwords and keys (often embedded in source code), and regulated data such as healthcare and financial information.

Clever Attackers Take Advantage of ChatGPT’s Source Code Sharing

Source code is the primary type of data exchanged with ChatGPT. It is no surprise that fraudsters, hackers, and other attackers try to exploit the hype around ChatGPT and AI apps in general to gain unauthorised access.

Exploiting topical events is a strategy attackers use frequently. According to the Spring 2023 Netskope Threat Labs Cloud and Threat Report, for example, attackers tried to take advantage of events such as the war between Russia and Ukraine, the earthquakes in Turkey and Syria, and the collapse of Silicon Valley Bank. ChatGPT’s large user base, potential for profit, and broad range of capabilities have made it hugely popular, and that popularity attracts the attention of fraudsters and attackers.

Netskope Threat Labs is currently monitoring several ChatGPT proxies. The apparent benefit these websites offer is free, unlimited access to the chatbot. The drawback is that the proxy operator can see every prompt you submit and every reply you receive.

Operator Can See All Prompts and Responses through ChatGPT Proxy (Netskope)
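To illustrate the risk, the sketch below shows how little code an operator would need to capture that traffic. It is a hypothetical example only: the upstream endpoint, request fields, and log location are placeholders, not any real proxy’s implementation.

```python
# Minimal sketch of a logging "free ChatGPT" proxy, for illustration only.
# UPSTREAM_URL and the JSON field names are hypothetical; the point is that
# every prompt and every reply passes through (and can be stored by) the
# proxy operator.
import logging
import requests
from flask import Flask, request, jsonify

UPSTREAM_URL = "https://api.example-chat-provider.com/v1/chat"  # hypothetical upstream
API_KEY = "operator-held-api-key"                               # paid for by the operator

app = Flask(__name__)
logging.basicConfig(filename="captured_traffic.log", level=logging.INFO)

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json(force=True).get("prompt", "")
    logging.info("PROMPT: %s", prompt)   # the operator now has the user's prompt

    upstream = requests.post(
        UPSTREAM_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    reply = upstream.json().get("reply", "")
    logging.info("REPLY: %s", reply)     # ...and the model's response

    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

Everything the user types, and everything the model returns, ends up in a log that only the operator controls.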

Many businesses have restricted access to chatbots such as ChatGPT to tackle these cyber threats. In financial services and healthcare, both tightly regulated industries, nearly one in five firms has implemented a total ban on employee use of ChatGPT, compared with about one in twenty in the technology sector.

Terry Ray, SVP of Data Security GTM and Field CTO at Imperva, said:

Forbidding employees from using generative AI is futile. We’ve seen this with so many other technologies – people are inevitably able to find their way around such restrictions and so prohibitions just create an endless game of whack-a-mole for security teams, without keeping the enterprise meaningfully safer.

Enabling the safe use of AI apps in a company is a multifaceted task. It involves identifying which applications are legitimate and approved for use, and designing controls that let users get full value from these apps while protecting the organisation from threats.
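As a concrete illustration, here is a minimal sketch of such a policy, assuming AI apps are matched by destination domain; the domains and actions are examples only, not a real product configuration.

```python
# Hypothetical AI-app access policy: sanctioned apps are allowed (with DLP
# inspection), known risky proxies are blocked, and anything unknown is
# logged so security teams can review it. Domains below are examples only.
ALLOWED_AI_APPS = {"chat.openai.com", "bard.google.com"}
BLOCKED_AI_APPS = {"free-gpt-proxy.example"}

def policy_for(domain: str) -> str:
    """Return the action to apply to traffic bound for an AI-app domain."""
    if domain in ALLOWED_AI_APPS:
        return "allow-with-dlp"
    if domain in BLOCKED_AI_APPS:
        return "block"
    return "warn-and-log"  # coach the user and record the event

if __name__ == "__main__":
    for domain in ("chat.openai.com", "free-gpt-proxy.example", "new-ai-tool.example"):
        print(domain, "->", policy_for(domain))
```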

Be careful and never share sensitive information with AI chatbots. They are not designed to safeguard such data, and mishandling it can have unanticipated consequences and breach privacy. Treat them as automated systems and keep them out of sensitive conversations. Block known malicious sites and URLs, inspect all HTTP and HTTPS content, and guard against opportunistic attacks.
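One practical layer of defence is to screen prompts locally before they ever reach a chatbot. The sketch below is a minimal example, assuming a simple regex-based check; the patterns are illustrative and far less thorough than a commercial DLP rule set.

```python
# Minimal sketch of a local pre-submission check for prompts. The patterns
# are illustrative only; real DLP tools use far richer detection.
import re

SENSITIVE_PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password assignment": re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Fix this config: password = hunter2, aws key AKIAABCDEFGHIJKLMNOP"
    hits = flag_sensitive(prompt)
    if hits:
        print("Blocked - prompt appears to contain:", ", ".join(hits))
    else:
        print("Prompt passed basic checks")
```

Even a basic check like this catches the most obvious leaks, such as hard-coded passwords or cloud access keys pasted in alongside source code.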

AI apps like ChatGPT pose a security risk to businesses, but it is a manageable one. Blocking the applications outright reduces the danger somewhat, but more sophisticated options are available, such as Data Loss Prevention (DLP) tooling and user training. These approaches let businesses take full advantage of AI applications while minimising the risks involved.

If malicious actors obtain sensitive information and use it for fraud or identity theft, the consequences can be significant, including financial losses, compensation payments, and legal action. Stolen data can also be used to send convincing phishing emails, or to spread disinformation and manipulate public opinion, causing severe, long-term damage to the trustworthiness of the companies or individuals affected.

Phishing Tackle offers a free 14-day trial to help train your users to avoid these types of attacks and test their knowledge with simulated attacks using various attack vectors. By focusing on training your users to spot these types of attacks, rather than relying solely on technology, you can ensure that your organisation is better prepared to defend against cyber threats and minimise the impact of any successful attacks.
