
Will AI change the world for good, or just change it?


Accelerated interest in artificial intelligence means that individuals and businesses will need to understand the ethics, risks and impacts of what’s being unleashed.

 

Artificial intelligence’s (AI’s) impact on the big data landscape is unfolding in quantum leaps. International Data Corporation (IDC) says that worldwide revenues for big data and business analytics will grow from US$150.8 billion in 2017 to more than US$210 billion in 2020, at a compound annual growth rate of 11.9%.1
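
For readers who want to check the arithmetic, the short sketch below (illustrative only; the figures come from the article, not from any new data) shows how the cited compound annual growth rate connects the 2017 and 2020 revenue figures.

```python
# Illustrative check of the cited IDC figures; numbers are taken from the article.
revenue_2017 = 150.8   # US$ billion, 2017
cagr = 0.119           # cited compound annual growth rate (11.9%)
years = 3              # 2017 -> 2020

# Compounding the 2017 base at the cited CAGR gives the implied 2020 revenue.
revenue_2020 = revenue_2017 * (1 + cagr) ** years
print(f"Implied 2020 revenue: US${revenue_2020:.1f}b")  # ~211.3, i.e. “more than US$210 billion”
```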

Large data sets and deep learning, a subset of AI, have recently emerged as the hottest tech trend among tech giants such as Google, Facebook, Amazon, IBM, Intel and Microsoft. Armed with deep pockets, they’re all actively investing in acquiring talent and developing AI hardware and software.

This accelerated interest in AI will, in turn, lead to a plethora of new risks. As these new predictive models are used to make business decisions, they will increasingly fall under the risk management accountability of the chief financial officer (CFO) or chief risk officer (CRO).

AI everywhere

The Internet of Things and AI are being woven into business practices and processes at an ever-faster pace and are set to disrupt every business and every industry. We’ve already seen the impact of credit scoring on financial services, with insurance firms evaluating customer risk profiles based on credit transaction or payment history. AI will take that a step further through the use of unstructured data and client interactions via chatbots.
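
As a rough illustration of what structured-data risk scoring can look like in practice, here is a minimal sketch; the features, data and model choice (a simple logistic regression built with scikit-learn) are hypothetical and not drawn from any particular firm’s approach.

```python
# Minimal sketch of a payment-history risk score; all data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per customer: [months on book, late payments, credit utilization]
X = rng.random((500, 3)) * np.array([60, 12, 1.0])

# Hypothetical label: 1 = defaulted; more likely with many late payments and high utilization
y = (0.2 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.5, 500) > 2.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new applicant: the estimated default probability feeds into the risk profile.
applicant = np.array([[24, 3, 0.8]])
print(f"Estimated default probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```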

As AI enables machines to interact with customers at an even more intimate level, marketers are using data not only to understand past trends, but also to predict future behavior. Predictive analytics gives brands the ability to automate marketing responses in any given customer situation, such as live web chats, with intelligent agents connecting the dots and finding products, content or medical treatment specifically targeted to the consumer.

According to Narrative Science2, roughly 10% of financial services organizations are using AI to compete with their peers and identify opportunities in their data that would otherwise be missed. AI in banks and credit unions is still in the early stages. These organizations are using such AI methods as predictive analytics, recommendation engines, voice recognition and response intelligence.

 

Based on the Narrative Science survey, 12% of organizations weren’t yet using AI because they felt it was too new and untested, or they weren’t sure about its security. Other challenges, such as fear of failure, siloed data sets and regulatory compliance, were also cited. Another key challenge for many organizations is that there is no clear internal ownership of testing emerging technologies: only 6% of those surveyed had an innovation leader or an executive dedicated to testing new ideas and processes.

 

This is just the tip of the iceberg. New risks and challenges will emerge that we haven’t even contemplated yet.

 

So what is the AI risk?

 

The risk of AI is that the development of the technology will outpace our ability to build the required governance and control structures. We’re seeing evolving business operating models in which mathematical formulas, or algos as they’re called, will inform and make decisions that directly affect the human race in substantive ways. AI is developing orders of magnitude faster than earlier transformative technologies such as the railroad, and we’re on a fast track to becoming humans who cannot live without it.

 

Are we ethically ready? Are we managing risks? Do we even understand what’s truly being unleashed? Are we building in a back-out plan?

 

 

Episode 1: Transparency of use

In the first video of this series, Dr. Cindy Gordon and Cathy Cobey explore who has the upper hand in the digital age: the consumers who create data or the corporations that use it. This episode focuses on data management and transparency of AI use.

Episode 2: AI governance and accountability

How can governance functions keep up with fast-paced development in AI? Who should be held accountable? Join Cathy Cobey and Dr. Cindy Gordon as they discuss these issues in episode two of Managing the Risks of AI.

Episode 3: Explainable AI

When AI can’t explain itself, are you relying on blind trust? In this episode, we explore the roles and responsibilities of the C-suite and board of directors in being accountable for AI.

Episode 4: Being AI ready

Why is the knowledge gap at the top slowing down the development of AI? In episode four of Managing the Risks of AI, we discuss the governance of AI, the role of education in driving accountability, and the importance of collaboration.

Episode 5: The impact of diversity in AI

What are the challenges facing diversity in AI, and what are the implications? This episode looks at the impact of diversity on the development of AI, including where investments are currently being made and the consequences of the lack of women and the overall lack of diversity among those building AI.

Episode 6: Gender bias in AI

Can AI be better than humans at making unbiased decisions? The final episode looks at how the lack of diversity among those who program AI, and in the underlying data sets, affects the development of AI.

The views expressed in this video by SalesChoice do not necessarily represent the views of EY.



Summary

In our video series Managing the Risks of AI, Dr. Cindy Gordon, CEO and Founder of SalesChoice, and Cathy Cobey, a partner at EY, have come together to explore the rise of AI and its impact on everyday business decisions. Gordon’s understanding of data and analytics, together with Cobey’s experience with managing technology risks, underpin a robust discussion on who has the upper hand in this digital age: customers, organizations or machines.

