Generative AI is booming. The explosion of AI-powered chatbots and large language models (LLMs) was ignited by the release of ChatGPT in late 2022. Now that these tools have captured the world’s interest, many organisations and industries are looking to incorporate AI chatbots into their day-to-day business in search of innovative efficiency gains.
The limits of these AI chatbots are ever-expanding. Now powered by OpenAI’s latest model, GPT-4, ChatGPT is claimed to exhibit ‘human-level performance’ and can even pass the US bar exam. It’s therefore no wonder that some in the legal industry are embracing this potential. Allen & Overy, a Magic Circle law firm, and PwC have both recently introduced AI chatbots to speed up the work of their lawyers. Yet others, like Mishcon de Reya, are restricting the use of ChatGPT over data security fears.
With two clear sides of the generative AI debate emerging, can law firms really trust AI chatbots from a cyber risk perspective? And if not, what should law firms be doing instead to bolster their cybersecurity strategies?
What are the cyber risks of lawyers using AI chatbots?
The cyber risks of AI chatbots can arise from all types of users, both non-malicious and malicious. Given the sensitive nature of lawyers’ everyday work, one concern around using AI chatbots is the security and privacy of the information and data that users input into the system. All prompts and queries given to ChatGPT, for example, are stored and visible to the chatbot’s provider. These queries will also almost certainly be used to develop the platform’s service and/or model at some point in the future.
For a lawyer using an AI chatbot to draft a contract, for example, any sensitive information included in the query is therefore stored on what could be an insecure platform. Queries stored online are at risk of being hacked, leaked or accidentally made publicly accessible. So while an AI chatbot can speed up processes like contract drafting, that speed comes with risk. It’s crucial that law firms take proactive measures to keep cybersecurity front of employees’ minds, especially if they permit the use of AI chatbots, to limit the chances of sensitive information being included in a query and exposed.
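To make that concrete, one proactive measure a firm could sit between staff and a hosted chatbot is a filter that strips recognisable sensitive tokens from a prompt before it leaves the network. The sketch below is purely illustrative: the patterns, the placeholder format and the `redact_prompt` helper are assumptions for this example, not a reference to any existing product or firm policy.

```python
import re

# Minimal sketch of a prompt-redaction guardrail. The patterns and the
# "matter reference" format are hypothetical; a production filter would
# need far broader coverage (names, addresses, document contents, etc.).
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "matter_ref": re.compile(r"\b[A-Z]{2,4}-\d{4,6}\b"),  # assumed internal format
}

def redact_prompt(prompt: str) -> str:
    """Replace obviously sensitive tokens with placeholders before a
    query is sent to an externally hosted chatbot."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt(
    "Draft a supply contract for jane.doe@client.com re matter AB-12345."
))
# Draft a supply contract for [EMAIL REDACTED] re matter [MATTER_REF REDACTED].
```

A simple screen like this is no substitute for policy and training, but it illustrates how little engineering is needed to reduce the amount of client data that reaches a third-party platform in the first place.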
Can AI chatbots aid cybercriminals?
There is no doubt that cybercriminals are already finding ways to leverage ChatGPT to conduct malicious activity. One of the key concerns is that an AI chatbot or LLM may help someone with malicious intent but insufficient skills to generate attacks well beyond their technical ability. Although in their current state AI chatbots are better suited to simple tasks and helping experts save time, this will change. And given the advancements already achieved with the release of GPT-4, that change may come even sooner than previously thought.
The other concern around the malicious use of AI chatbots is that they will be used to write more convincing phishing emails. ChatGPT is underpinned by a language model that uses deep learning to produce human-like text. Users can even ask the AI chatbot to produce written content in the style of a particular author. The ability of malicious users to generate extremely convincing phishing emails – including emails in multiple languages – is concerning, particularly for the legal sector.
PwC’s 2022 Annual Law Firms’ Survey found that the most common cyber incidents experienced by firms are phishing attacks. Given that much of the work in the legal industry is conducted over email, the sector is highly vulnerable to this form of attack, which continues to exploit human susceptibility. Aided by more compelling phishing emails, cybercriminals will no doubt continue to target law firms via this threat vector.
What should law firms do to bolster their cybersecurity strategies?
The good news is that there are several ways law firms can bolster their cybersecurity to mitigate evolving cyber risks like those posed by AI chatbots. Providing cyber awareness training with phishing simulations can help to instil a ‘security first’ mindset across an entire firm. Conducted little and often, this training helps to ensure the importance of cybersecurity is translated into the day-to-day operations of employees, which may now include the use of an AI-powered chatbot.
However, it’s important to remember that humans are fallible; firms need sufficient defence in depth to contain a critical cyber incident if an employee does fall victim to a phishing attack. Conducting regular cybersecurity audits to identify key areas of weakness helps law firms go beyond what they already know about their cybersecurity, uncovering unknown vulnerabilities. Collaborating with an experienced security partner to assess risk and support remediation efforts puts law firms in a stronger position to face growing threats and ensure a return on their cyber investment. This is particularly true for smaller firms that may lack the in-house expertise to conduct these assessments internally.
As with any emerging technology, AI chatbots should be treated with caution; the warning from GCHQ that ChatGPT and other AI-powered chatbots are an emerging security threat should not be taken lightly. Yet if firms recognise this risk, educate their teams and assess their security posture with regular audits, they are far better positioned to protect their networks, data and clients.
Lawrence Perret-Hall
Director
CYFOR Secure