Brit - How AI has changed the threat landscape in 2023

Today, AI seems to make the news headlines on an almost daily basis and the pace of change in AI capability has been one of the main talking points.

AI platforms have come to the mass market far more quickly than anticipated

While AI is not a new concept (the phrase was coined in the mid-1950s), the rate at which we've gone from rudimentary platforms to advanced apps and specialised tools has exceeded many predictions. The primary headline maker has been OpenAI's ChatGPT. The company was founded in 2015, the first version of its language model came to market in 2018, and a partnership with Microsoft followed in 2019. At the end of November 2022, ChatGPT was launched on the GPT-3.5 model, whose 175 billion parameters offered users capabilities that were thought to be years away. In March 2023, OpenAI took things a step further with GPT-4, which some estimates have put at an almost mind-boggling 170 trillion parameters – which, if accurate, would make it almost 1,000 times larger in scope than its predecessor.

OpenAI doesn’t have a monopoly on AI tools, as Google have launched their similar Bard product, too. Another headline maker has been the image generation tool Midjourney. Comparisons of the art and photorealistic images generated by V4 and V5 show an incredible increase in capability in a very short time. V6 is expected to be just around the corner and will likely represent another giant leap forward.

The rate of change can be further demonstrated by predictions that the AI market will be worth $407 billion by 2027, up an astonishing 368% from $86.8 billion in 2022. This acceleration can be attributed to the growth in capability of the likes of Midjourney, Bard and ChatGPT, as well as other tools like Stable Diffusion and DALL-E, all of which have released major versions or updates since 2022.

The surge in AI platforms has been met with trepidation

Looking at the different tools that have come to the market and what they can be used for, you might feel that we’re in the midst of a technological revolution. With this incredible leap forward comes trepidation. In the world of academia, concerns have been raised about students using language models to write essays and dissertations.

In one survey, as many as 43% of students said they had already used AI tools to help with their academic work. The issue of plagiarism and non-original work is a problem at various levels of education.

 

As well as the issues around potential plagiarism, tools like ChatGPT and the newly launched Google Bard might not qualify or verify the information they use as a source, and can subsequently present content drawn from non-credible sources as fact. Midjourney’s ability to create imagery raises a related issue around false news stories. In March, the BBC had to fact-check several images circulating online that reportedly showed Donald Trump’s arrest – all were found to have been created with AI. As it becomes more difficult to distinguish between real and fake images, the potential to create convincing fake news grows.

 

In the case of image generation tools, Stability AI, the company behind Stable Diffusion, has been at the centre of a lawsuit brought by Getty Images. Getty has accused Stability AI of the “ingestion of [Getty’s] copyrighted images to train the data [that allows it to create images]” – most importantly, without any form of remuneration or payment to Getty and their partners.


In July, a wide-ranging class action suit was filed against OpenAI and Microsoft, claiming their generative AI programs, like ChatGPT and DALL-E, are trained on “stolen private information” taken from what the suit described as hundreds of millions of internet users, including children, without proper permission. We can see how disruptive publicly available AI creation tools can be to market-leading businesses, especially given that data laws are decades old and perhaps not equipped to deal with the modern AI landscape.

 

The impact on education and image creation is just the tip of the iceberg. Prominent figures in tech like Elon Musk and the AI expert Geoffrey Hinton have added to a chorus of voices calling for moderation due to unknown implications. Italian regulators even went so far as to temporarily ban ChatGPT over data privacy concerns. Established tech giants like Facebook have previously shut down AI programs after the systems started communicating in their own language, raising control concerns among Facebook’s engineers.

What does this mean for cyber security?

Looking at the different tools that have come to the market along with the spread and rate of adoption, there are reasonable concerns about the capabilities of AI platforms from a cyber security perspective. The threat landscape is in a state of near-constant evolution, but some of the most significant areas of vulnerability that are worth further examination include the following:

 

More sophisticated phishing and whaling scams

Phishing scams have been around, and well understood, for some years. They will continue to be a cyber risk, as AI technology can be used to make them more sophisticated. While some email phishing scams are easy to spot due to poor spelling and grammar, readily available AI software can now generate convincing text that can fool recipients.

Another area identified as a potential vulnerability is so-called “whaling” scams – these are similar to traditional phishing scams but target high-level decision-makers within specific organisations. The ability of AI to produce highly customisable lures means that company CEOs and senior stakeholders can be targeted with sophisticated, socially engineered emails that allow hackers to exploit points of vulnerability.

 

The threat from vishing and Deep Fakes

While phishing emails can fool even CEOs, a more pertinent threat comes in the form of so-called vishing and Deep Fakes. This technology has existed for a couple of years and uses AI to change people’s faces and voices. Deep Fakes initially made the headlines in the entertainment industry, but they have also been used to exploit cyber security vulnerabilities.

One recent case saw fraudsters use real-time voice cloning to pose as a Dubai-based business director, fooling a Hong Kong-based bank manager into transferring $35 million into the criminals’ accounts. The real-time ability of AI technology to change voices and faces means that people could receive phone calls or even video calls from criminals pretending to be people they know and trust. Criminals subsequently exploit that trust to steal money and information.

 

Utilising tools like ChatGPT to write clandestine code

Although ChatGPT has only been making headlines for a few months, it has already been found capable of creating code for polymorphic malware. Polymorphic malware differs from regular malware in that it mutates as it replicates and uses encrypted components, making it harder for cyber security software to spot. That constant evolution makes it highly evasive and difficult to detect. CyberArk researchers, quoted by HackRead, are clear about the threat:

“The use of ChatGPT’s API within malware can present significant challenges for security professionals.”

ChatGPT’s ability to write code also lowers the barrier to entry for hackers, as they no longer need to be trained software engineers to create clandestine code. This potentially opens hacking up to less skilled people who might otherwise have lacked the expertise to create a viable cyber security threat.

 

What Brit are doing to monitor the challenges

While some of these new threats might raise concerns, it’s important to remember that cyber security risks are ever-changing and long predate the adoption of AI. We have covered risks across a number of areas, such as Privileged Access Management and the threat to critical infrastructure, as well as quantum computing and APIs.

Brit are continually monitoring the advent of new technologies to ensure we’re prepared to underwrite cyber insurance risks. Ben Maidment, Head of Global Cyber Privacy and Technology, acknowledges the importance of keeping a finger on the pulse of AI in the context of cyber security:

“The recent rapid acceleration of AI capability will only lead to increased adoption across all manner of businesses, and with this comes opportunities and threats. The corresponding interest from the public will also lead to further use (and abuse) of AI. We’re already starting to see positive and negative use cases from a cyber point of view in the wild. This is something we will continue to track and adapt our underwriting approach for, accordingly.”

Ben Maidment, Head of Global Cyber Privacy and Technology

If you’re concerned about the threat posed by AI systems, speak to us

As we have demonstrated, the adoption of AI has surpassed many expectations and brought a new type of threat to cyber security. Threats once considered hypothetical now exist and present a tangible risk. Brit are a market leader in dealing with cyber security threats as they develop, and we’ve been able to understand and write the risk presented by a variety of threats over the years. AI is no different, so if you have any concerns, please get in touch with our cyber team today!