What does this mean for cyber security?
Looking at the tools that have come to market, and the speed and breadth of their adoption, there are reasonable concerns about the capabilities of AI platforms from a cyber security perspective. The threat landscape is in a state of near-constant evolution, but some of the most significant areas of vulnerability worth further examination include the following:
More sophisticated phishing and whaling scams
Phishing scams have been around, and well understood, for some years, and they will remain a cyber risk as AI can be used to make them more sophisticated. While some phishing emails are easy to spot due to poor spelling and grammar, readily available AI software can generate convincing text that fools recipients.
Another area identified as a potential vulnerability is so-called “whaling” scams – similar to traditional phishing scams, but targeting high-level decision-makers within specific organisations. The ability of AI to produce highly customised lures means that company CEOs and senior stakeholders can be targeted with sophisticated emails that use social engineering, allowing hackers to exploit points of vulnerability.
The threat from vishing and Deep Fakes
While phishing emails can fool CEOs, a more pertinent threat comes in the form of so-called vishing and Deep Fakes. This technology has existed for a couple of years and uses AI to change people’s faces and voices. Deep Fakes initially made headlines in the entertainment industry, but they have also been used to exploit cyber security vulnerabilities.
One recent case occurred when fraudsters used real-time voice cloning to pose as a Dubai-based business director. They fooled a Hong Kong-based bank manager into transferring $35 million into the criminals’ bank account. The real-time ability of AI technology to change voices and faces means that people could receive phone calls or even video calls from criminals pretending to be people they know and trust. Criminals then exploit that trust to steal money and information.
Utilising tools like ChatGPT to write clandestine code
Although ChatGPT has only been making headlines for a few months, it has already been found capable of generating code for polymorphic malware. Polymorphic malware differs from regular malware in that it mutates with each replication and uses encrypted components, making it highly evasive and difficult for security software to detect. CyberArk, as quoted in the HackRead article linked above, are clear about the threat:
The use of ChatGPT’s API within malware can present significant challenges for security professionals.
ChatGPT’s ability to write code also lowers the barrier to entry for hackers, as they no longer need to be trained software engineers to create clandestine code. This potentially opens hacking up to less skilled people who would otherwise have lacked the expertise to create a viable cyber security threat.
What Brit are doing to monitor the challenges
While some of these new threats might raise concerns, it’s important to remember that cyber security risks are ever-changing and long predate the adoption of AI. We have covered risks across a number of areas, such as Privileged Access Management and the threat to critical infrastructure, as well as quantum computing and APIs.
Brit are continually monitoring the advent of new technologies to ensure we’re prepared to underwrite cyber insurance risks. Ben Maidment, the Head of Global Cyber Privacy and Technology, acknowledges the importance of staying on the pulse of AI in the context of cyber security: