
How can you make the AI of today ready for the regulation of tomorrow?

Balancing generative AI’s potential with its risk and regulatory complexities requires a flexible and principles-based approach.


In brief

  • For businesses navigating the complexities of generative AI and regulation, a principles-based approach offers a flexible way to manage risk and foster trust.
  • Embracing a trusted AI framework empowers businesses to harness the power of AI, benefiting stakeholders and shaping a more equitable and sustainable future.

For over a decade, companies have drawn on the capabilities of artificial intelligence (AI) in myriad narrow use cases—including customer service chatbots, financial fraud detection and personalised e-commerce recommendations. As AI systems continue to engage with employees, customers and the public across various sectors, it is essential that they embody the same ethics and values expected of the organisations and people for whom they work. Trust and positive customer experiences hinge on the successful implementation of AI ethics. 

In recent months, new generative AI technologies and foundational models have been making headlines, such as OpenAI's GPT-4.1 These innovations unlock a vast range of transformative use cases, creating opportunities for organisations whilst also presenting fresh challenges. Governments worldwide are taking notice, as illustrated by the European Parliament's draft of the EU Artificial Intelligence Act and the UK's pro-innovation white paper on AI regulation, both of which have taken these new foundational models into account.2,3

However, the growing imperative to regulate AI has spawned a multifaceted patchwork of approaches globally, complicating matters for businesses already grappling with new AI risks. Consequently, the realm of AI governance is still largely uncharted territory for corporations: boardrooms and C-suites have yet to formally consider and codify AI ethics and values, leaving businesses vulnerable to reputational risks and potential regulatory penalties. Moreover, as generative AI technologies continue to evolve in new and unpredictable ways, many of the underlying assumptions made by these draft regulations and nascent corporate governance approaches no longer always hold. For instance, traditional AI systems have been designed to perform specific tasks, which allows for targeted, sector-specific regulation and governance. In contrast, users can leverage generative AI to produce text, images, speech and even music across a broad range of domains and use cases, making it difficult to establish a one-size-fits-all framework.
 

Organisations' exposure to risk is intensifying, underscoring the urgent need to prepare for future AI regulation and develop robust governance frameworks. So, how can companies navigate this increasingly dynamic technology and regulatory landscape, and what steps can business leaders take to establish effective AI governance? What are the critical questions organisations should ask to brace themselves for the future of AI regulation and the ethical challenges it brings?
 

In this article, we will delve into the world of AI regulations and explore the challenges for organisations when creating effective AI governance frameworks. We will also provide key insights on the steps businesses need to take to ensure they are ready for tomorrow's regulation whilst accounting for the distinctive nature of generative AI.


Chapter 1

Navigate the regulatory maze

From ethical principles to tangible policies

As the adoption of AI accelerates, permeating products and services across both private and public sectors, legislators and regulatory bodies worldwide are working hard to keep pace. Countries have been quick to recognise AI as a catalyst for economic growth, but governments also acknowledge its potential impact on citizens, society and our broader environment, as well as the importance of adapting or augmenting existing regulatory frameworks to safeguard established rights.

In the wake of intense public discourse between 2016 and 2019, a global consensus has emerged among governments, businesses and NGOs on the core ethical principles guiding AI usage. The AI Principles of the Organisation for Economic Co-operation and Development (OECD), adopted by the G20 in 2019, exemplify this agreement.4 In an historic move, all 193 UNESCO Member States endorsed the first-ever global standard-setting instrument on AI ethics in November 2021.5

Now, leading nations and international organisations are diligently translating these principles into actionable regulatory approaches. By early 2023, trailblazers in AI regulation, including the EU, US, UK, Canada, Japan, South Korea, Singapore and China, had either proposed new legislation or published comprehensive guidelines to govern this transformative technology.


Chapter 2

Striking the right balance

How can governments create regulatory objectives without stifling innovation?

Given AI's vast array of application areas and its potential impact on citizens and society, it's crucial to strike a balance between sector-agnostic baselines and sector-specific rulemaking to address different needs and contexts. The question is, what’s the right balance?
 

The pattern is decidedly more sector-agnostic in jurisdictions such as the US, EU, Canada, Japan, Singapore and China, where policy initiatives establish overarching regulatory objectives, whilst additional sectoral work creates or amends regulations in areas such as medical devices, industrial machinery, public sector AI usage, agriculture, food safety, financial services and internet information services. For instance, the US's Blueprint for an AI Bill of Rights, the EU's AI Act and China's Ethical Norms for New Generation AI all provide foundations for sector-agnostic policy.7,8,9
 

The primary mechanism for maximising cross-sector coherence within these proposals is the ‘risk-based approach’ to AI regulation. A leading example is the EU’s AI Act, which adjusts the degree of regulatory compliance required based on the classification of risk: whilst most AI poses little or no risk, high-risk systems, such as those used in critical national infrastructure or in safety-related applications, will be subject to the strictest obligations.10
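
To illustrate how a risk-based approach might translate into practice, the short Python sketch below maps a use case to a simplified risk tier. The tier names echo the EU AI Act's published categories, but the use-case attributes and classification logic are hypothetical simplifications for illustration, not the Act's legal tests.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Simplified tiers modelled on the EU AI Act's risk categories
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (conformity assessment, logging, human oversight)"
    LIMITED = "transparency obligations (e.g. disclose AI-generated content)"
    MINIMAL = "little or no additional obligation"


@dataclass
class UseCase:
    # Hypothetical attributes for illustration; the Act defines its own legal tests
    name: str
    manipulates_behaviour: bool = False   # e.g. exploitative social scoring
    safety_critical: bool = False         # e.g. critical infrastructure, medical devices
    interacts_with_public: bool = False   # e.g. customer-facing chatbots


def classify(use_case: UseCase) -> RiskTier:
    """Check from most to least severe tier and return the first match."""
    if use_case.manipulates_behaviour:
        return RiskTier.UNACCEPTABLE
    if use_case.safety_critical:
        return RiskTier.HIGH
    if use_case.interacts_with_public:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify(UseCase("grid-load forecasting", safety_critical=True)).value)
# -> strict obligations (conformity assessment, logging, human oversight)
```

The point of the pattern is that compliance effort scales with the tier, so the majority of low-risk systems carry little overhead whilst high-risk systems attract the strictest controls.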
 

In contrast, the UK’s pro-innovation approach to AI regulation shifts the balance towards sector-based regulation, with additional coordination from government to support regulators on issues requiring cross-cutting collaboration, such as monitoring and evaluating the framework’s effectiveness, assessing risks across the economy and providing education and awareness to give clarity to businesses.11 The UK’s approach recognises that regulation is not always the most effective way to support responsible innovation; instead, it is aligned with and supplemented by a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards.
 

Challenges faced by businesses

In the face of the shifting regulatory landscape, businesses must confront several challenges as they integrate AI technologies into their operations:
 

  • Keeping up with technology changes. As generative AI technologies like GPT-4 continue to advance, businesses must question their underlying assumptions about existing AI risks, which are likely to have been based on discrete use cases and data.
     

  • Keeping up with regulatory changes. Businesses must stay informed and agile as they adapt to the ever-changing AI regulatory environment, which can be a daunting task given the speed at which new policies and guidelines are introduced. 
     

  • Allocating resources for compliance. Ensuring that organisations remain within the boundaries of various AI regulations can be resource-intensive, requiring businesses to allocate time, personnel and budget, or to engage independent reviewers, to meet a diverse set of requirements.
     

  • Combining innovation with ethical considerations. Companies must recognise that ethical design drives growth and innovation because systems that adhere to ethical principles and regulations tend to be higher performing whilst also protecting customers and society.
     

  • Managing potential liabilities arising from generative AI use. As organisations further integrate AI into business operations, they must navigate the potential legal liabilities and reputational risks that may arise from deploying these technologies.
     

  • Navigating different ethical regimes as well as cross-border legal and regulatory requirements. For businesses operating internationally, remaining sensitive to and complying with ‘softer’ cultural norms as well as myriad cross-border legal and regulatory requirements can be a complex and challenging undertaking.


Chapter 3

Turn principles and policies into trust

A principles-based framework can help organisations create common ethical standards.

In today's rapidly evolving technology landscape, creating trusted AI systems urgently requires organisations to implement a flexible, principles-based approach. Such a framework would offer a systematic way for businesses to ensure that their AI systems adhere to the common ethical standards and best practices demanded by governments, whilst providing clear actions for dealing with the tailored requirements of particular jurisdictions or sector-specific regulators. 

Seven steps for operationalising trusted AI:

  1. Establish a consistent ethical framework.
    Develop an ethical framework tailored to your organisation, drawing on principles already established by the business, the OECD's AI Principles, or guidance from an independent reviewer as a foundation. This framework should provide clear guidance on ethical goals, considerations and boundaries within the context of the company and the industry sector in which it operates.

  2. Create a cross-functional team.
    Assemble a diverse, multi-disciplinary team with representation from various areas, such as domain experts, ethicists, data scientists, IT, legal, human resources, technology risk and compliance. This team will oversee the implementation of your ethical framework, allowing the business to align AI technologies, including generative AI, with pertinent values, such as inclusivity, transparency, robustness and accountability, ultimately fostering trust and driving positive planetary impact.                                                                                                                                                        

  3. Build an inventory of current AI systems.
    The risk and internal audit functions in many organisations remain largely unaware of the scale at which AI systems are deployed across the enterprise. Creating a baseline inventory of these systems, together with a consistent framework for assessing the inherent risk of each AI use case, should guide the level of governance and control required to mitigate that risk and maximise value. Available guidance in this area is largely based on draft regulation that seeks to protect people and the environment; organisations must not forget to consider commercial risk as well. (A minimal sketch of such an inventory record follows this list.)

  4. Develop clear AI auditing procedures.
    Create a set of guidelines that translate your ethical framework into practical, actionable steps for AI developers and engineers, as well as those who use AI to partially or fully automate their activities. These guidelines should encompass the entire AI lifecycle, from design to deployment, addressing data collection, model development, performance monitoring and third-party risks.                                                                                                                                                                                                                    

  5. Integrate ethics into AI development.
    Embed ethical considerations into every stage of the AI development process, ensuring that developers, engineers, product owners and users understand the legal and ethical considerations of the AI they are building or buying and their responsibility to apply appropriate safeguards. This might include implementing ethical checkpoints or gate-based reviews at crucial development milestones and incorporating ethics-based metrics and KPIs to evaluate AI performance and impact on business outcomes.

  6. Build awareness and training.
    Ensure that everyone in the organisation, from business leaders to back-office professionals, is aware of AI and the ethical principles associated with its development and use. In our experience, although ethical frameworks are essential, they can sometimes fail to become properly embedded and operationalised when leadership is not fully appreciative of the risks.

  7. Monitor and continuously improve.
    Consider an independent, regular audit of AI systems to assess their ethical performance, addressing any shortcomings or adverse effects. Maintain a central inventory of AI systems to support risk management and regulatory compliance. Additionally, gather feedback from stakeholders and users to refine the AI auditing guidelines, ensuring that the organisation’s ethical framework remains relevant and up to date.
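
To make steps 3 and 5 more concrete, the Python sketch below pairs a minimal inventory record with a gate-based ethical checkpoint. Every field name, check and rule here is a hypothetical simplification for illustration, not a definitive implementation; a real framework would derive them from the organisation's own ethical framework and the regulation that applies to it.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical gate criteria; a real framework would derive these from the
# organisation's ethical framework and applicable regulation.
REQUIRED_CHECKS = {
    "bias_evaluated",
    "data_provenance_documented",
    "human_oversight_defined",
}


@dataclass
class AISystemRecord:
    """One entry in the central AI inventory (step 3)."""
    system_id: str
    owner: str
    purpose: str
    is_generative: bool
    inherent_risk: str                     # e.g. "minimal", "limited" or "high"
    completed_checks: set = field(default_factory=set)
    last_reviewed: Optional[date] = None


def passes_gate(record: AISystemRecord) -> bool:
    """Ethical checkpoint at a development milestone (step 5).

    High-risk and generative systems must clear every required check;
    lower-risk systems proceed with a lighter review.
    """
    if record.inherent_risk == "high" or record.is_generative:
        return REQUIRED_CHECKS <= record.completed_checks
    return "data_provenance_documented" in record.completed_checks


inventory = [
    AISystemRecord(
        system_id="cs-bot-01",
        owner="Customer Service",
        purpose="Generative support chatbot",
        is_generative=True,
        inherent_risk="limited",
        completed_checks={"bias_evaluated", "data_provenance_documented"},
    ),
]

for record in inventory:
    status = "gate passed" if passes_gate(record) else "gate blocked"
    print(record.system_id, status)
# -> cs-bot-01 gate blocked (human oversight not yet defined)
```

Keeping the inventory as structured records also makes it straightforward to report on coverage, schedule the independent audits described in step 7 and adjust gate criteria as regulation evolves.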


Summary

In the face of a patchwork of proposed regulations and the rise of generative AI, businesses face the daunting challenge of building trust in their AI-driven products and services. This requires a proactive approach to managing risk and a culture of responsibility. A principles-based framework for trusted AI offers a flexible solution to navigating the complexities of AI ethics and regulation. 

By doing so, organisations can demonstrate their commitment to transparency, accountability and fairness and drive AI-powered innovation that benefits stakeholders and shapes a more equitable future.

GPT-4 was used for help with wording, formatting, and styling throughout this work.
 
