
Putting the intelligence back in AI – prepping your organization to be future-ready


Contributor:  Charitarth Bharti

 

ChatGPT and generative AI are pushing us into new scenarios: Canadian businesses must take a stance and put proactive precautions in place to safeguard data and privacy.


In brief 

  • Canadian businesses should protect data and privacy ahead of new regulations debuting this year.
  • AI technologies like GPTs raise governance questions for policymakers, who want to define rules to avoid risks such as liability and cybercrime. 

With software like ChatGPT hurtling us into new scenarios, it may be time for Canadian businesses to take a stance on generative AI and implement proactive precautions to safeguard data and privacy.

As Canada gets ready for new privacy regulations set to debut this year, many organizations may be overlooking a very real trojan horse slipping by their defences and setting them up for considerable risk and liability — the uncontrolled use of artificial intelligence (AI).

Not long ago, the world saw the first versions of generative pre-trained transformers (GPTs), a family of large language model (LLM) solutions, become publicly available and create new expectations around the benefits they could offer for personal and corporate use. Since then, we have witnessed the evolution of the GPTs as they have blazed their way into the mainstream and gained popularity as useful employee productivity tools, passing even some of the most challenging tests designed for human beings.

However, there is no certainty on the sources of information used to train these systems or on the legitimacy and accuracy of the data they draw from. Without proper oversight and review, AI solutions can expose companies to potential risk, from liability associated with fraud, privacy and copyright infringement to social engineering attacks and cybercrime.

With Canada on the brink of launching its first AI legislation — Bill C-27’s Artificial Intelligence and Data Act (AIDA) — plans appear to be underway to establish common requirements with respect to the design, development and use of AI. But as with the similarly intentioned AI Act in Europe, emerging AI technologies like GPTs have policymakers questioning governance and working to define rules for their implementation.

There’s the question of scope — how to define AI so that it’s subject to regulations and laws. Minimizing risks will be an important factor. That poses an entirely different challenge, as predicting how the technology will evolve may prove difficult. And it will be important to continually redefine guidelines — like AIDA and the AI Act — for the use and potential misuse of these technologies, since regulation often can’t keep up with the speed of technological change.

Putting stricter regulations on high-risk applications and foundation models will be a start, providing risk and quality management requirements subject to external audits. But the regulatory process is not as easy as it seems, with debates around parameters — like whether to ban biometric recognition systems — having held up progress on the AI Act. With the ironing-out process raising more and more questions, both regulatory missives will likely be out of date by the time they are put into action.

But action will be critical. And time is of the essence. A group led by the Future of Life Institute recently released a petition requesting that AI labs immediately pause the training of AI systems, particularly those more powerful than the latest-generation GPT-4, citing “profound risks to society and humanity” resulting from technologies that “not even their creators can understand, predict or reliably control.”

But controlling developers alone is not the solution. Clear rules will be necessary to allow access to AI technologies and their many benefits, while reducing risk to meet corporate appetites.

The AI odyssey

While there’s no shortage of science fiction movies with “AI gone wild” plots lamenting the dangers of human-competitive intelligence, it’s still too early to be foretelling “Skynet”-level events. Seeing how far ChatGPT has come since its initial rollout in November 2022, however, should be sufficient cause to pause.

GPT technology is gaining traction beyond open-source resources and being integrated into legitimized for-purchase business offerings. OpenAI recently rolled out the Whisper API, a new service for speech-to-text transcription and translation, and tech giant Microsoft has integrated GPT into its upcoming Security Copilot program, using the technology to analyze volumes of data and help detect security breaches.
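
To make that kind of integration concrete, here is a minimal sketch of calling the Whisper speech-to-text API from Python. It assumes the openai Python package (v1.x interface) with an OPENAI_API_KEY environment variable set; the audio file name is purely illustrative.

# Minimal sketch: transcribing an audio file with OpenAI's Whisper API.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting.mp3", "rb") as audio_file:  # "meeting.mp3" is a hypothetical file
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # Whisper speech-to-text model
        file=audio_file,
    )

print(transcript.text)  # the transcribed text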

With new plugins and extensions being rolled out to help ChatGPT “learn” and optimize performance — including the ability to enhance natural language or better understand emotions — parameters and vigilance will be critical as we await regulation. In the absence of controls, it will be important for organizations to define how and when these tools can — or should — be considered, providing guidance and oversight to prevent private or sensitive company information from being leaked, absorbed and repurposed across the GPT universe.
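
As one illustration of that kind of oversight, the sketch below shows a naive pre-submission filter that redacts obvious personal or internal identifiers before a prompt leaves the organization. The patterns and the "INTERNAL-" project-code convention are hypothetical; a real control would rely on proper data loss prevention tooling rather than a handful of regular expressions.

# Illustrative sketch only: redact obvious identifiers before a prompt is sent
# to an external generative AI tool. All patterns are hypothetical examples.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "project_code": re.compile(r"\bINTERNAL-\d{4}\b"),  # hypothetical naming convention
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the INTERNAL-0042 findings and email them to jane.doe@example.com."
    print(redact(raw))
    # Summarize the [REDACTED PROJECT_CODE] findings and email them to [REDACTED EMAIL].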

An Australian Responsible AI Index report found that despite 82% of companies believing they were taking a responsible approach to using AI, fewer than a quarter had measures in place to deploy AI responsibly and protect their business.

With so many unknowns, organizations will need to invest in responsible strategies as AI matures and take precautions today because there’s no avoiding it — the future is now.

Confidence in the mission

Balancing the risk with AI’s many rewards is important. But where to start? Here are three steps you could take today to get a head start and help your organization use AI responsibly:

Be prepared for the scrutiny AI-powered services attract.

Marketing departments are typically attracted to AI-based services because of their many benefits — efficiency, cost savings and a reduction in human-intensive labour, for example. But whether you are promoting your own products or those of third-party suppliers, if you don’t need to call out AI services, avoid doing so.

AI comes with significant risks — from privacy, consumer protection, copyright and intellectual property considerations to compliance and legal obligations that need to be met. Questions — like who owns repurposed information or even code generated from GPT’s data sources — can only be expected to multiply as industry euphoria and AI capabilities grow.

Review your company’s policies and standards related to AI and acceptable use.

Start by clearly defining acceptable use policies to manage employee and contractor interactions, or ban the use of these tools outright if the risk is too great. Update training to educate employees on terms of use and remind them of cybersecurity strategies; with phishing emails growing increasingly sophisticated and believable thanks to LLM software, that training will help reduce security and privacy risks. And consider implementing default settings to restrict unacceptable use or requiring documentation to identify when a GPT solution has been used on work.
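
To illustrate what such default settings might look like in practice, here is a hedged sketch of a machine-readable acceptable-use policy and a default-deny check against it. The tool names, data classifications and rules are all hypothetical and would need to reflect your own policies.

# Illustrative sketch only: a machine-readable acceptable-use policy for
# generative AI tools. Tool names, data classes and rules are hypothetical.
from dataclasses import dataclass

# Which data classifications each approved tool may receive as input.
# Tools not listed here are blocked by default.
ACCEPTABLE_USE_POLICY = {
    "chatgpt-enterprise": {"public", "internal"},
    "internal-llm": {"public", "internal", "confidential"},
}

@dataclass
class AIRequest:
    tool: str         # e.g. "chatgpt-enterprise"
    data_class: str   # e.g. "public", "internal", "confidential", "restricted"
    documented: bool  # has the employee logged this use, as policy requires?

def is_permitted(request: AIRequest) -> bool:
    """Default-deny: the tool must be approved, the data classification
    allowed for that tool, and the use documented."""
    allowed = ACCEPTABLE_USE_POLICY.get(request.tool, set())
    return request.documented and request.data_class in allowed

if __name__ == "__main__":
    print(is_permitted(AIRequest("chatgpt-enterprise", "internal", documented=True)))      # True
    print(is_permitted(AIRequest("chatgpt-enterprise", "confidential", documented=True)))  # False
    print(is_permitted(AIRequest("unapproved-tool", "public", documented=True)))           # False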

Use a trusted operator and third-party auditor to assess all AI systems.

AI features are already being integrated into existing, popular business software programs. But due to its very nature, generative AI can expose a company to legal, commercial and reputational risk if it’s not used responsibly. User prompts can lead to employee, customer and organizational data theft or strategic and proprietary company information inadvertently making its way into the public domain.

To address criticism and potential regulatory scrutiny, OpenAI introduced controls in ChatGPT that turn off “chat history,” preventing “history disabled” conversations from being used to train OpenAI’s underlying models and from being displayed in the history sidebar. Instead, they’re stored on the company’s servers, reviewed on an as-needed basis for abuse and deleted after 30 days.

Additionally, a business subscription tier has been added for professionals looking to control their data and enterprises seeking to manage end users. With competition in this space increasing, alternative models like StableLM by Stability AI and open-source models like Open Assistant are becoming available to businesses to address preferences including performance, transparency and privacy controls.

It’s important to recognize that not all AI products and operators are built the same. While key indicators to look for include privacy and data usage practices, an independent third party can help identify areas of concern and provide recommendations to avoid risk. By engaging an AI auditor, organizations can strengthen trust with stakeholders, protect client information, safeguard confidence and company reputation, and futureproof business systems as the regulatory environment evolves.

In a world grown increasingly reliant on technology, it will be critical for businesses to assess and reassess the value these tools bring and be conscious of new or potential risks, as with any other technology. Remaining attentive to new challenges and being nimble enough to erect guardrails will be critical to the mission as AI continues to evolve and deliver new capabilities.

Summary

Canadian businesses using AI, specifically GPTs, may face significant risk and liability due to uncertain sources of information used to train these systems, which could lead to fraud, data breaches, and cybercrime. As Canada prepares to implement its first AI legislation, organizations should invest in responsible strategies to mitigate potential risks. 
 
