
If AI investigates crime, who sets the rules?

AI is revolutionizing investigative work in the financial space. However, it also raises legal questions and new challenges.


In brief

  • AI uses advanced techniques like natural language processing and machine learning to perform investigative and redactional tasks quickly and efficiently.
  • As legislation catches up with technology, it is important to remain compliant with all relevant regulations when using AI.
  • Human oversight and guidance will remain essential to overcome challenges, build trust and use AI in an ethical way.

From efficient document analysis to EU regulation – artificial intelligence (AI) is changing the rules of play in economic criminal law. In February 2024, the member states of the European Union (EU) unanimously approved the groundbreaking draft regulation of AI. This marks a significant step in the development of AI regulation and has far-reaching implications for the AI landscape in Europe and beyond. In this article, we focus on how AI is used in investigations, consider the legal implications and explore the challenges.

How AI is used in forensic investigations

When there is suspicion of misconduct or a crime, confidential documents often need to be analyzed. In particular, tasks in a forensic investigation include collecting, analyzing and evaluating communication such as email traffic. Often, large numbers of messages need to be not only read, but also put into context, understood and then classified as relevant or not relevant. Until now, this work has involved a heavy human workload, with multiple reviewers spending many hours on the task. Furthermore, data confidentiality considerations – due to trade secrets or privacy rights – will often dictate that documents have to be redacted for cross-border processing or submission to an authority. This work also requires trained personnel and takes time.
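For illustration, the rule-based core of such a redaction step can be sketched in a few lines of Python. The patterns and placeholder labels below are illustrative assumptions, not the configuration of any specific tool; real redaction systems combine many more rules with named-entity recognition.

```python
import re

# Illustrative patterns for personal identifiers; production tools
# use far more comprehensive rule sets and trained language models.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Mask personal identifiers before cross-border submission."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

msg = "Contact jane.doe@example.com or +41 44 123 45 67 for details."
print(redact(msg))
# → Contact [EMAIL REDACTED] or [PHONE REDACTED] for details.
```

Even a sketch like this shows why the task traditionally consumed so many reviewer hours: every identifier type needs its own rule, and each rule must be validated against real documents.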

AI is highly suited to making such tasks more efficient. It is particularly capable of understanding concepts and classifying the messages holistically, completing processes much faster than a human review team. But that’s not all AI can do. In the investigation of economic offenses, AI has three main areas of application.
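The relevance classification described above can be reduced to a minimal sketch. The terms, weights and threshold below are hypothetical; a production system would learn such signals from labelled training data with machine learning rather than use a hand-written list.

```python
# Hypothetical relevance terms and weights for a fraud investigation;
# in practice these signals are learned from labelled review data.
RELEVANCE_TERMS = {"invoice": 2, "offshore": 3, "kickback": 5, "transfer": 1}
THRESHOLD = 4

def classify(message: str) -> str:
    """Score a message by weighted term hits and label it."""
    tokens = message.lower().split()
    score = sum(RELEVANCE_TERMS.get(t.strip(".,!?"), 0) for t in tokens)
    return "relevant" if score >= THRESHOLD else "not relevant"

emails = [
    "Please route the kickback through the offshore account.",
    "Lunch at noon tomorrow?",
]
for e in emails:
    print(classify(e))
# → relevant
# → not relevant
```

The efficiency gain comes from triage: human reviewers concentrate on the messages the model flags, instead of reading every item in the corpus.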

Legal boundaries of AI

AI can only be used within legal constraints. The EU AI Act, a significant regulatory step by the EU Parliament and Council, was provisionally agreed on 9 December 2023 and unanimously approved by EU member states on 2 February 2024. It marks a pivotal moment for AI regulation in Europe and will potentially influence other jurisdictions.


In Switzerland, the Federal Council announced on 22 November 2023 that it had commissioned the Federal Department of the Environment, Transport, Energy and Communication (DETEC), together with all relevant federal offices, to identify possible approaches to the regulation of AI by the end of 2024. This came after the Federal Administration had already set guidelines for the handling of AI in November 2020.


The draft EU AI regulation was created amidst AI’s rapid development and its increasing integration into various aspects of our lives. It aims to leverage the opportunities of AI, while at the same time minimizing risks and abuse. The European Parliament deems it essential for a smoothly functioning internal market. Key points of the regulation include:


Definitions: The definition of AI was highly debated. The text focuses on systems that process data, recognize patterns and make decisions based on this information, and is framed to remain adaptable to emerging AI technologies.


Promotion of innovation: Innovation is strengthened through the introduction of a “regulatory sandbox”. This allows companies to develop and test innovative AI systems without immediately having to meet strict regulatory requirements. This significantly lowers initial development and testing costs and encourages both start-ups and established companies to invest in new AI technologies.


Risk-based approach: A risk-based approach is central to the regulation. AI systems in sensitive areas, such as biometrics, face increased supervision to uphold security and data protection, and defined risk categories help companies assess their exposure.

Risk categories according to the EU AI Act

The risk scale makes it clear that consumer protection is at the heart of the regulation. Principles and obligations are established for all AI systems to ensure fairness, accountability, and transparency. This is crucial in preserving the rights of citizens and ensuring that AI technologies take their expectations and concerns into account.

Heavy penalties for violations: 7% of global turnover or EUR 35m

Notably, the EU has set stringent penalties for breaches. In the event of a violation, fines of up to 35 million euros or 7% of a company’s global annual turnover apply, whichever is higher. This serves to promote compliance. Moreover, the discussion on the regulation’s implementation and monitoring, including supplier self-assessment and the role of private standardization bodies, highlights the EU’s commitment to effective, equitable AI governance. It will be interesting to observe how the EU implements the regulation effectively and fairly.
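The fine cap for the most serious violations works out as the higher of the two figures, which a short sketch makes concrete (the function name and the example turnover are illustrative):

```python
def max_fine(global_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a company with EUR 2 billion turnover, the 7% limb dominates:
print(f"{max_fine(2_000_000_000):,.0f}")  # → 140,000,000
# For a smaller company, the EUR 35m floor applies instead:
print(f"{max_fine(100_000_000):,.0f}")    # → 35,000,000
```

In other words, the turnover-based limb scales the exposure for large groups, while the fixed amount keeps the deterrent meaningful for smaller firms.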

Overcoming practical challenges

Naturally, practical challenges and concerns exist when using AI in forensic investigations. Although AI undoubtedly increases efficiency, we must not lose sight of the fact that AI still relies on algorithms and data. The quality of this data and the accuracy of the algorithms are crucial to the accuracy of the results. Misinterpretations could lead to incorrect decisions, which could have serious consequences. Therefore, it is advisable to ask the AI to explain its reasoning in order to obtain verifiable answers.

Also, adherence to compliance rules is a complex task that requires human judgment and expertise. There is a risk that excessive automation could replace the finely tuned assessments of human experts and potentially overlook subtle but important aspects, or even cause the system to judge with bias.

A quote often attributed to Albert Einstein highlights these challenges: “The definition of insanity is doing the same thing over and over again and expecting different results.” In this regard, we should be careful not to fall into the trap of putting too much trust in AI, but rather use AI as a complement to human expertise.

Summary

Integrating AI into forensic investigations and the detection of economic crimes undoubtedly offers immense benefits. However, we must tread carefully in its application to ensure that it operates with the required precision and in accordance with the law.

Acknowledgement

We kindly thank Adrian Ott and Iman Gonzalez Prada for their valuable contributions to this article.
