Organisations are beginning to implement artificial intelligence (AI) solutions at scale, and the enterprise software they use is increasingly AI-powered. The aim is to increase efficiency, productivity and creativity, but the technology brings significant additional cyber risks.
Businesses are already encountering AI-related cyber problems, and reports of staff disclosing sensitive commercial information and intellectual property to AI models are not uncommon. Unfortunately, this type of problem goes hand in hand with the technology: AI, with its natural-language user interface, puts advanced analytics tools in the hands of non-technical staff across all functions of an organisation.
At the same time, the amount of corporate data being given to chatbots by employees rose nearly fivefold in the 12 months to March 2024, according to a study of three million workers reported by SC Magazine. This puts organisations at much higher risk of data leakage.
That risk is heightened by weak adherence to cybersecurity protocols. According to the EY 2023 Global Cybersecurity Leadership Insights Study, 64% of Chief Information Security Officers (CISOs) were not satisfied with their non-IT colleagues’ adoption of cybersecurity best practice. Indeed, this was cited as the third-biggest internal cybersecurity challenge, while human error continued to be a major cyberattack vector.
Focus on technology, governance and operations is imperative
While cybersecurity is of paramount importance, it must not be allowed to become a barrier to AI adoption. The cybersecurity function therefore needs to adopt new approaches that support the safe acceleration of adoption. It also needs to nurture a cyber-secure workforce, which requires visibility into how AI tools are being used across the business. This calls for a focus on three key areas: technology, governance and operations.
In terms of technology, solutions are becoming available that enable cyber teams to detect when certain AI services are being used, track data flows and automate compliance. Other tools monitor data within an organisation’s network, flagging documents being uploaded to, or prompts being entered into, services such as ChatGPT.
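As a rough illustration of the kind of check such monitoring tools apply, the sketch below scans an outbound prompt for sensitive-data patterns before it is allowed to reach an external AI service. The patterns, function names and blocking logic are assumptions for illustration only, not a description of any specific product.

```python
import re

# Illustrative patterns only; a real deployment would use the organisation's own
# data classification rules and a dedicated data loss prevention (DLP) engine.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarise this CONFIDENTIAL supplier contract for jane.doe@example.com"
    findings = scan_prompt(prompt)
    if findings:
        print("Blocked before reaching the AI service:", ", ".join(findings))
    else:
        print("Prompt allowed")
```

In practice this kind of check sits alongside logging and alerting, so the cyber team gains the visibility into AI usage described above rather than simply blocking traffic.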
On governance, policy should focus on threat modelling from the outset. This allows organisations to identify and quantify risks, which in turn informs the design of appropriate controls. Organisations should also define the procedures for ensuring data protection and privacy in the development of AI models, and they must be accountable for the outputs of the models in use.
Threat evaluation must be accompanied by continuous data verification, classification and tagging. Our research has found that some organisations have as little as 20% of their data tagged or classified. All businesses need to prioritise tagging and verification for their most critical and sensitive data to ensure that they have the right safeguards for issues such as identity, data flow and data access.
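To make the tagging step concrete, here is a minimal sketch of rule-based classification, assuming a simple keyword-driven scheme with hypothetical tiers (restricted, confidential, internal). Real classification schemes are set by an organisation’s own data governance policy and typically combine automated scanning with manual review.

```python
from dataclasses import dataclass, field

# Hypothetical tiers and keyword rules, ordered from most to least sensitive;
# a real scheme is defined by the organisation's data governance policy.
CLASSIFICATION_RULES = {
    "restricted": ("payroll", "merger", "source code"),
    "confidential": ("customer", "contract", "financial"),
}


@dataclass
class Document:
    name: str
    text: str
    tags: set[str] = field(default_factory=set)


def classify(doc: Document) -> Document:
    """Tag the document with the most sensitive tier whose keywords appear in it."""
    lowered = doc.text.lower()
    for tier, keywords in CLASSIFICATION_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            doc.tags.add(tier)
            break
    else:
        doc.tags.add("internal")  # default tier when no rule matches
    return doc


if __name__ == "__main__":
    doc = classify(Document("q3_report.docx", "Customer contract renewals for Q3"))
    print(doc.name, doc.tags)  # q3_report.docx {'confidential'}
```

Once tags like these are attached to documents, they can drive the safeguards mentioned above, such as who may access a file, where it may flow and whether it may be shared with an external AI service.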