
There has been a lot of talk about ChatGPT since it burst onto the market several months ago. Despite its infancy and the lack of standardized regulations around intelligent automation, the OpenAI tool has exploded into the tech ecosystems of businesses everywhere.
While many see significant benefits from its use, few discuss the cyber risk to the industry and our organizations.
The Dark Side of ChatGPT Usage
Many consumers and organizations are using ChatGPT for customer-facing chat services, fact-checking and blog writing (but not this blog!). However, there are also many using it for nefarious purposes.
Phishing
Hackers can use ChatGPT to craft hyper-realistic phishing emails, since it corrects the grammar and spelling errors that are common phishing indicators. ChatGPT can also write malware code faster and more efficiently than a human author could. Although ChatGPT has a content policy intended to block malicious use, those safeguards do not always stop bad actors.
Social Engineering
ChatGPT revolutionizes social engineering. By posing the right questions, hackers could obtain critical information about an organization and its employee base. Using this information to manipulate employees into granting access increases the risk of compromise. In short, ChatGPT threatens our ability to keep our organizations safe and increases the likelihood of an attack or breach.
Backdoors
The impact and likelihood of attack grow exponentially when ChatGPT is used internally at organizations.
For example, if the AI checks code for vulnerabilities or drafts security policies, that could improve the organization’s security posture. However, introducing a third-party application into your code base increases the likelihood of new vulnerabilities, backdoors or data exfiltration. Likewise, using ChatGPT to respond to customer chat may reduce wait times, but it also grants a third-party application access to customer data, as the illustrative sketch below shows.
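To make that data-exposure point concrete, here is a minimal sketch of what a typical customer-chat integration might look like. It is an assumption about a common integration pattern, not any particular product's code; the function name and prompt are purely illustrative.

```python
# Illustrative only: a minimal customer-chat relay, assuming a typical
# integration pattern. Names and prompts are for demonstration purposes.
import os
import requests

OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def answer_customer(message: str) -> str:
    """Forward a raw customer message to a third-party API and return the reply.

    Whatever the customer typed (account numbers, addresses, complaints)
    leaves your environment and is processed by the external provider.
    """
    response = requests.post(
        OPENAI_CHAT_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a support assistant."},
                # Customer data crosses the organizational boundary here.
                {"role": "user", "content": message},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

Even in this simple setup, the vendor sees every message your customers send, which is why the integration deserves the same scrutiny as any other third-party data processor.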
It’s essential to consider what outcomes the tool will bring to the organization and how those outcomes affect operations to determine if the risk is worth the reward.
“Free” ChatGPT Doesn’t Mean “Risk-Free” AI
Unfortunately, the lack of industry standards and agreed-upon practices for third-party risk leaves many companies scrambling. Some subscribe to the idea that “something is better than nothing.” But often, that “something” builds a false sense of security.
Many companies conduct third-party risk assessments as part of the procurement process, meaning they only look at their providers once annually (at best) and usually just the paid services. This fragmented definition of third-party risk misses a broad swath of the business surface area, leaving it open to breach.
Companies considering the utilization of ChatGPT must ensure the tool and the provider undergo the same third-party risk management process as any other application. Although OpenAI is an established organization with many years of experience promoting and developing AI systems, the relative immaturity of the ChatGPT application, combined with the lack of security assurance available for OpenAI, can put organizations at risk.
Being proactive and applying a risk-first approach ensures AI’s strategic use doesn’t compromise organizational security.
Assess ChatGPT as a Third-Party Vendor
When assessing any third-party risk — including ChatGPT — security leaders must look holistically at the provider and consider the impact across the business.
Ask the Right Questions
Start by asking these questions:
What is the impact of this provider on the organization?
- What do they do for the organization?
- What do they have access to?
- What can they “touch”?
If an organization uses a tool that has direct access to customer data or source code, the impact of that provider is exceptionally high.
How likely is this provider to have an incident, breach or issue?
- Have they demonstrated their ability to keep their product and any held data safe?
If the provider can’t document sufficient risk reduction controls (via compliance reports or otherwise), there is a much higher likelihood of an incident.
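One way to operationalize these two questions is a simple impact-times-likelihood score. The sketch below is purely illustrative; the 1-5 scales, example ratings and tier thresholds are assumptions for demonstration, not an industry standard or a RiskOptics methodology.

```python
# Illustrative only: a toy vendor risk score built from the impact and
# likelihood questions above. Scales and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    name: str
    impact: int      # 1 (no sensitive access) .. 5 (direct access to customer data or source code)
    likelihood: int  # 1 (strong, documented controls) .. 5 (no security assurance available)

    def risk_score(self) -> int:
        return self.impact * self.likelihood

    def tier(self) -> str:
        score = self.risk_score()
        if score >= 15:
            return "high: full assessment before use"
        if score >= 8:
            return "medium: review controls and contract terms"
        return "low: monitor annually"

# Example: a chat tool with access to customer data and little published assurance
vendor = VendorAssessment(name="ChatGPT", impact=5, likelihood=4)
print(vendor.name, vendor.risk_score(), vendor.tier())
```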
Check the Vendor’s Security and Compliance Posture
OpenAI makes ChatGPT, and although the company provides documentation on using its products safely, it holds no external compliance certifications and does not publish publicly available security policies.
There are many examples of hackers bypassing safety mechanisms and using the tool for nefarious activities. Just last week, OpenAI reported its first major breach, which it attributed to a bug in an open-source library. The bug exposed chat titles and logs, and it is also believed to have exposed partial credit card numbers and customer personal information.
Cyber-attacks and breaches can result in lost profits, productivity and trust — so getting ahead of these problems will help establish greater confidence from the board to the C-suite and the customer.
Remove the Cybersecurity Blinders to ChatGPT
Overall, there are both short- and long-term benefits for businesses considering ChatGPT and other intelligent automation tools as part of their vendor ecosystem. But we must hold these tools to a higher standard. Third-party relationships of any kind are a company’s biggest vulnerability, so excluding ChatGPT from those strict controls and assessments only increases the likelihood of a breach.
Leaders should instead take their cybersecurity blinders off and start asking the right questions to boost their information security.
The bottom line: businesses are putting their faith in a tool and a manufacturer that has done nothing to earn it. In a statement on March 24, 2023, the company said, “we are confident that there is no ongoing risk to users’ data.” Are you?
RiskOptics can help you manage third-party risk from tools like ChatGPT. Check out ZenGRC and schedule a free live demo today!