Legislating for AI – The European AI Act

14 March 2024 | John Small

The 13th of March 2024 marked a significant milestone in the move toward legislation aimed at managing the freight train that is AI development. During that day’s sitting, the European Parliament approved the Artificial Intelligence Act, legislation which seeks to increase scrutiny of, and control over, the use of AI technology.

Those familiar with financial regulation will be no strangers to a risk-based approach, and the same principle applies here. AI applications will be categorised according to the level of risk they present, and that category will determine the level of scrutiny to which the system is subject. Broadly, the risk categories have been set as follows:

·       Banned – Applications which pose a clear risk to fundamental rights would be banned from use. Examples could include the processing of biometric data, such as AI-based facial recognition technology (law enforcement and military uses will have some exemptions).

·       High Risk – This includes AI-based technologies used in critical infrastructure or in settings like healthcare, education and banking. These applications would be legal but highly regulated: under the law, such systems will require careful implementation, risk assessment and human oversight.

·       Lower Risk – Administrative support systems, such as spam filters, would be subject to far lighter regulation and much lower levels of scrutiny.

Generative AI platforms like ChatGPT will also have regulatory standards to meet. These focus mostly on transparency about the data used to train the models and on ensuring that they do not infringe copyright. When deploying generative AI-based systems, companies will need to be comfortable that the models used are demonstrably compliant with this element of the legislation. The law is not yet enacted and there are further stages before it becomes operative EU law, but this process is expected to be completed in the very near future and companies will need to be ready to react.

While the EU legislation is not necessarily directly applicable (unless you or your business has a touchpoint with an EU member state), the fact that it is arguably a ‘world first’ means it is likely to set a standard. There is no current plan for equivalent legislation in the UK, and the form any regulation will take is still under consideration; indications are that AI regulation in the UK is likely to form part of existing regulatory bodies’ remits rather than falling under a single dedicated body. In the absence of UK legislation, companies will inevitably look to the standards being met in Europe and the assurance that compliance provides. AI platforms capable of demonstrating compliance will present a lower risk profile, and will therefore be more attractive to potential customers, than those that cannot. This presents a compelling case for AI companies to meet the standards even if they do not fall under the jurisdiction of the EU law.

In turn, the legislation will also create a standard by which AI risk can be judged as part of a company’s overall risk profile. Given the imminent enactment of this legislation, it is important that companies and their boards consider their exposure to AI-based systems and how those systems are being used. Even companies with no AI systems in use today should consider how they will deal with AI as it creeps into mainstream products, as is already the case with search engines and Microsoft’s Copilot, for example. Many companies will have been discussing the risks posed by emerging technology for a considerable time, but the introduction of this legislation should act as an alert to those that have not.

Finally, it is important to balance risk against the reward of innovation. We have seen first-hand what the well-judged application of technology can do to boost efficiency and productivity, and innovation should not suffer because of regulation or legislation; indeed, the EU is at pains to say its legislation seeks to avoid this. With an awareness of AI technology – its functionality, purpose and practical use – companies should be able to innovate with AI while remaining well within their risk and regulatory tolerances.

If you would like to discuss how our team can help you with the selection, risk assessment and implementation of technology solutions, please get in touch.


Contact Us


5 Anley Street, St Helier, Jersey, Channel Islands, JE2 3QE