
Was 2023 AI’s most disruptive year yet?

2023 has been a disruptive year for artificial intelligence (AI) - so much so that McKinsey & Company dubbed 2023 "generative AI's breakout moment". As the year draws to a close, Alternit One (A1) reviews the past twelve months and explores key considerations for firms before embracing the technology in the future.

A breakout year for AI


AI’s year started with a bang. Following its launch at the end of 2022, ChatGPT reached 100 million users during the first quarter of 2023, according to UNESCO. Reuters reported in August that ChatGPT "is expected to make $200 million in revenue by the end of 2023", and that OpenAI is making over "$80 million in revenue per month, compared to just $28 million in the entire year of 2022". Reuters also reported that OpenAI is on track to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it.


Optimising the potential of AI


Such rapid evolution in AI technology has prompted us to review our policies around the technology and consider how our clients can keep their company data secure whilst optimising the functionality that AI has to offer. We explore four key considerations for firms when it comes to embracing AI in 2024 and beyond.

  • AI Tool Selection

When selecting AI tools, companies must consider the ownership, management and control of data within the application itself. By keeping this in mind, companies can strategically plan for the safe and secure adoption of AI technology. To do this successfully, it is important to review AI technology as two primary types: internal AI and external AI. Internal AI refers to tools that a company manages within its own IT stack. External AI tools are third-party tools, owned and managed outside of the organisation's own cloud environment.

  • Data Protection

It goes without saying that all AI tools must be used in a way that complies with data protection laws and regulations, including those set out by the FCA and SEC as well as GDPR requirements. A company's data protection strategy will involve revisiting the aforementioned internal AI and external AI categories. Each will require a different approach to protecting both the personal data of employees and clients and company data, which will likely mean new risk assessments and a review of governance and risk policies.

  • Ethical Use

Where the output of an AI tool provides significant volumes of data, it is critical that the output is cross-checked and approved before being used to influence any financial decisions. It is important to note that AI tools are still in their infancy, and firms should proceed with caution when using them, implementing the technology only to fact-check and/or provide context to research-backed data.

  • Training and Awareness

All employees must receive training on how to use and implement AI tools within business operations, so they know how to operate the technology in a manner that complies with both regulatory expectations and the company's own AI policy. The regulatory framework for AI is still in its infancy, and ongoing training and awareness will be key for staff as the technology develops.


It has been a busy year for firms trying to follow developments in AI, and 2024 promises to usher in new waves of innovation. A1 is here to help clients with the adoption of AI in their business. We assist our clients with the selection of available AI tools and provide training for teams, whilst also ensuring ethical use and adherence to data protection regulations. If you would like to learn more about how we can help you, contact us today.






