With the launch of services like Microsoft's Copilot and ChatGPT, generative Artificial Intelligence (AI) has been a hot topic over the first few months of 2023. Whilst AI is not new, these developments have arrived in the business community at lightning speed. Like any technological advancement, generative AI presents the financial and technological world with many questions, particularly for funds in relation to regulatory obligations and ethics.
Chris Steele, CTO at AlternitOne, explores the development of AI and how firms can navigate this complex and uncertain landscape.
We have arrived at a tipping point for generative AI. Thierry Breton, the EU's Commissioner for the Internal Market, noted at the end of Q1 2023 that whilst 'AI has been around for decades, it has now reached new capacities that have been fuelled by computing power'. Generative AI has advanced so rapidly that legislators and regulators have been unable to formalise a framework to help businesses understand how to use the technology securely and effectively. The way we approach the coming months will set the foundations for how AI technology is used in the future. It is critical that we build a healthy and resilient approach, with a clear appreciation of risk, so that, as with any 'new' technology, both businesses and the people within them can maximise the benefits of AI in a measured and understandable way.
When designing technology and how we use it, it is important to consider context and intention. Key questions might be:
How might AI help or harm our people or our business?
What are we aiming to achieve by using the technology?
Are we clear on the regulators' current guidance?
These questions have been leading conversations for many regulators. At the end of March, the World Economic Forum published an article detailing that the EU is considering far-reaching AI legislation to be published later this year. The legislation, known as the Artificial Intelligence (AI) Act, will focus on addressing concerns surrounding the regulation and cybersecurity of AI, which in turn affect the quality, accountability, human oversight and transparency of data. The EU is set to establish a European Artificial Intelligence Board, which will be responsible for implementing the regulation and ensuring its uniform application across the EU. The FCA's latest business plan for 2023/24 also makes clear that the regulator is exploring AI and its impact on UK firms. It is key that regulatory priorities for AI foster innovation whilst also ensuring investors are protected. Whilst these are positive steps for the development of the technology itself, firms are left in murky waters, with little certainty about how to move forward ethically while also navigating governance and risk policies and procedures.
The current AI landscape feels very much like ESG in the early 2000s. When the concept and practice of ESG strategy began to proliferate, there was no globally agreed standard or framework for measuring the credibility of data or how funds truly performed against criteria. With different firms taking different approaches, it was impossible to objectively compare funds, presenting both investors and funds with multiple regulatory challenges. Those challenges are lessening, and frameworks are becoming more accessible and understandable. AI will follow the same path, as regulatory bodies begin to tackle the complexities of these advancements and create frameworks that businesses can follow to ensure the safe use of AI.