On 12 May this year I addressed around 600 lawyers at the annual dinner of the Association of Regulatory and Disciplinary Lawyers (ARDL) at the Guildhall, London, where I spoke about the role of equality, diversity and inclusion in Artificial Intelligence. I spoke about the potential of AI, its influence and impact on society and on how businesses operate, and the power of generative AI to transform the legal industry by automating and enhancing aspects of legal work such as contract analysis, due diligence, litigation processes and regulatory compliance. I also spoke of the risks of AI, such as losing control of our data to digital tools that harvest everything from user behaviour to sensitive financial information.
It is predicted that by 2030 AI will be responsible for 5% of the UK economy. The World Economic Forum estimates that 83 million jobs will be lost to AI and 69 million will be created over the next five years. But a concern that cannot be dismissed is the lack of diverse data being fed to AI systems, the lack of diversity among the people developing them, and the unintended biases this technology may produce, particularly in decision making, together with the effect this may have on our economy and society as a whole, potentially leading to economic dislocation and civil unrest.
The technology sector is one of the least diverse in the UK: according to Tech Nation, only 26% of its workforce is female and 15.2% is from an ethnic minority background. Whilst we have seen progress on diversity in the sector in recent years, further steps need to be taken as we race forward with technology. You may be wondering why diversity in tech is so important. It matters because the decisions these technologies make can have real-life implications and long-lasting consequences; beyond the moral and ethical reasons, there is a hard business case for regulating this industry quickly and effectively to ensure the data used produces diverse and equitable outcomes.
Because this powerful technology depends on human input from programmers and developers, consideration must be given to diversity and inclusion when building these systems, or we run the risk of flawed AI tools, such as algorithms that perpetuate bias. Algorithms tend to reflect human biases: it is human beings who choose the data that algorithms use, and who decide how the results of those algorithms will be applied. If diversity is not sufficiently represented in the data, it cannot be reflected in the outcomes.
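To make the point concrete, here is a minimal, purely illustrative sketch in Python. Every group, score and number in it is an invented assumption, not real data or any actual system's method; it simply shows how a model trained on data dominated by one group can learn a rule that systematically disadvantages another.

```python
import random

random.seed(0)

# Toy dataset: applicants from two groups. Group A dominates the
# historical data (90%); group B is underrepresented (10%).
# Both groups are equally suitable, but group B's scores carry a
# systematic skew (e.g. a test calibrated on group A).
def make_applicant(group):
    suitable = random.random() < 0.5
    base = 70 if suitable else 40
    shift = 0 if group == "A" else -15  # measurement skew for group B
    return {"group": group, "suitable": suitable,
            "score": base + shift + random.gauss(0, 8)}

train = ([make_applicant("A") for _ in range(900)]
         + [make_applicant("B") for _ in range(100)])

def accuracy(data, threshold):
    # Fraction of applicants the threshold rule classifies correctly.
    return sum((a["score"] >= threshold) == a["suitable"] for a in data) / len(data)

# "Training": pick the score threshold that maximises accuracy on the
# skewed historical data. Because group A dominates, the learned
# threshold fits group A's score distribution.
threshold = max(range(30, 80), key=lambda t: accuracy(train, t))

# Evaluate on fresh data from each group separately.
test_a = [make_applicant("A") for _ in range(1000)]
test_b = [make_applicant("B") for _ in range(1000)]
print(f"learned threshold: {threshold}")
print(f"group A accuracy: {accuracy(test_a, threshold):.0%}")
print(f"group B accuracy: {accuracy(test_b, threshold):.0%}")
```

Running this, the single learned threshold serves the majority group well but produces a markedly higher error rate for the underrepresented group, even though both groups are equally suitable by construction. The bias comes entirely from the humans' choice of training data, not from the algorithm itself.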
With this in mind, I found the government’s position at the time, as set out in its AI White Paper, bizarre: it preferred not to legislate, stating instead that it “will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI.” The government indicated it would do this by empowering existing regulators to prepare tailored, context-specific approaches that suit how AI is used in each sector. This approach, I feel, will allow risks, unintentional exclusion and higher error rates to go unchallenged.
Most of our current laws were not written with AI in mind, leaving real-life issues unaddressed. We now have technological systems that can converse with us as if they were other people, with ChatGPT able to recreate the most human of traits; yet whilst there are plenty of laws that regulate how we behave as humans, there are few, if any, that govern AI. I appreciate the need not to hamper innovation or growth, but leaving regulators alone to rein in AI may do more harm than good, keeping harmful but legal AI in place with no effective framework for the enforcement of legal rights and duties. Clearly our colleagues in the European Parliament think differently, as they have set out a legal framework for AI in the Artificial Intelligence Act, which seeks to ensure safety and fundamental rights.
Effective regulation of how AI is used and developed requires the full weight of government. Whilst regulators can influence behaviour through education, training and the setting of standards, government legislation in this space can address bias and discrimination before they arise by ensuring AI technology is developed responsibly, safely and transparently, and that there is accountability. This will positively impact global perceptions of AI as well as boost its legitimacy and prominence across all business sectors and jurisdictions. Google, Amazon, Microsoft, Meta and OpenAI have agreed to watermarking: alerting the public when an image, video or text has been created by artificial intelligence. Whilst in its infancy, it is a welcome start.
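The companies’ actual watermarking schemes are not spelled out here, but as a purely illustrative sketch (the key, tag format and helper functions below are all invented assumptions), the simplest form of text provenance marking can be thought of as a signed tag that any altered copy fails to verify:

```python
import hmac
import hashlib

# Hypothetical provider-side secret key, for illustration only.
SECRET_KEY = b"demo-key-not-real"

def tag_ai_output(text: str) -> str:
    """Append a provenance tag declaring the text AI-generated."""
    sig = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[AI-GENERATED:{sig}]"

def verify_tag(tagged: str) -> bool:
    """Check whether the tag matches the text it is attached to."""
    try:
        text, tag_line = tagged.rsplit("\n", 1)
    except ValueError:
        return False
    if not (tag_line.startswith("[AI-GENERATED:") and tag_line.endswith("]")):
        return False
    sig = tag_line[len("[AI-GENERATED:"):-1]
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

output = tag_ai_output("This paragraph was drafted by a language model.")
print(verify_tag(output))                            # True: tag intact
print(verify_tag(output.replace("model", "human")))  # False: text altered
```

Real research proposals go considerably further, for example embedding statistical signals in the generated text itself so that the marker can survive copying and light editing rather than living in a detachable tag.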