Tech Trend: Responsible AI: mitigating risks and building trust
In the first episode of our Tech Trends series for this year, we explore the concept of Responsible AI — a crucial aspect of Generative AI evolution. Kartik Shinde, Partner, Cybersecurity Consulting, EY India, joins us in discussing the risks and benefits of AI advancements, and why building trust is important. He also sheds light on the transition from traditional AI to Gen AI, the significance of risk-based regulation, the value of human-centered design, and the role of technology in ensuring reliable AI outputs.
For your convenience, the full text transcript of this podcast is available below:
Pallavi: Welcome to EY India Insights Podcast. I am Pallavi Janakiraman, and we are thrilled to introduce our new podcast series on Tech Trends for this year.
Join us as we discuss the upcoming trends in the tech landscape with our leading tech consulting Partners. In today's episode, we delve into Responsible AI. As we navigate the Generative AI (GenAI) revolution, it is becoming increasingly crucial to emphasize Responsible AI by ensuring ethical conduct, transparency, and accountability.
To dive deeper into this topic, we are joined by Kartik Shinde, Partner, Cybersecurity Consulting, EY India. With over 20 years of experience, Kartik is among the leading cybersecurity consultants for financial services clients, ensuring robust information security strategies for banks and financial institutions.
Thank you, Kartik, for joining us today, and welcome to our podcast.
Kartik: Thanks, Pallavi. Pleasure to be here.
Pallavi: There is a lot of hype around Gen AI, and concerns about its potential misuse are also growing. What are the biggest risks associated with the rise of Gen AI, and why is building trust so crucial?
Kartik: I will list down some of the biggest risks:
Misinformation and fake content: We have seen that generating content using Gen AI is as easy as writing the right prompt. But the same capability powers deepfake technology, which poses a significant risk of propagating misinformation. Deepfake videos and images can be indistinguishable from real ones, leading to the spread of false information and manipulation of public opinion. For instance, deepfake videos of public figures, politicians, or celebrities saying or doing things they never actually said or did could influence public perception or even disrupt political processes.
Privacy concerns: Gen AI, or malicious use thereof, can threaten privacy by generating fake personas or fabricated content that invades an individual's privacy. For example, AI algorithms can be used to generate fake social media profiles with realistic images and information, which can then be used for identity theft or even social engineering attacks.
Potential for malicious use: Tools like ChatGPT have put enough guardrails around the model to ensure that no one can craft malicious output from the app. In the initial days, people could craft a prompt that would generate a well-sounding, convincing phishing email, but over time this has been curbed. However, there has also been an evolution of malicious GPTs, adding to the complexity of AI security concerns. These malicious Gen AI models can be exploited for purposes such as creating false documents, impersonating individuals, crafting convincing phishing emails, or writing exploits for vulnerabilities that exist in products and organizations.
Another example is AI-generated text: highly convincing phishing emails that trick users into revealing information or even downloading malware. Hence, building trust is crucial because trust is fundamental to the widespread adoption and acceptance of these AI technologies.
Without trust, people may be hesitant to interact with the AI systems or even actively resist implementation, which could impede the realization of the technology’s potential benefits.
Building trust also requires transparency, accountability, and responsible use of AI technologies to mitigate the risks associated with their misuse. At EY, we have a trusted AI framework covering the entire trusted AI solution lifecycle, which helps our clients implement Responsible AI technologies and solutions that are rolled out only after taking into consideration all the risks that we talked about.
Pallavi: Thank you, Kartik, for these valuable inputs. What is the difference between traditional AI and the new wave of Gen AI? What makes the new wave of Gen AI more powerful and potentially riskier?
Kartik: Traditional AI typically involved supervised or unsupervised learning algorithms that could analyze data to make predictions or decisions based on predefined rules or patterns. Examples include a chatbot tailored to a specific topic, such as a helpdesk system, a recommendation system, an image classifier, or natural language processing models.
Gen AI goes beyond traditional AI by creating new content such as images, videos, and text that mimics human-like creativity. Unlike traditional AI, which focused on analyzing existing data, Gen AI can generate entirely new content based on learned patterns and associations. On the flip side, examples include deepfake videos, text generation models, and image synthesis algorithms. What makes it more powerful and riskier is its ability to create highly realistic content.
While traditional AI systems were constrained by the data they were trained on and the predefined rules they followed, Gen AI has the potential to create content that is indistinguishable from reality, leading to concerns about its misuse for deception, manipulation, and other malicious purposes.
Pallavi: Thank you, Kartik. Can you explain the concept of risk-based regulation and how it could be applied to different AI applications?
Kartik: Risk-based regulation involves assessing the potential risks associated with AI applications and implementing regulations or guidelines accordingly. This approach recognizes that not all AI applications pose the same level of risk and that regulatory measures should be tailored to address specific risks based on factors such as potential impact on individuals, society, or the environment.
For example, a high-risk AI application, such as one used in autonomous vehicles, medical devices, or diagnostic systems, may be subject to stricter regulatory requirements. This could include testing, certification, and ongoing monitoring to ensure safety, reliability, and ethical use. In contrast, a low-risk application, such as a recommendation algorithm on an e-commerce platform, may be subject to less stringent regulatory oversight, allowing for innovation and flexibility while still addressing potential risks such as bias, discrimination, or privacy violations.
Pallavi: We would like to talk about human-centered design and ethical considerations in AI development and deployment. Why do you think these aspects are so crucial in shaping Responsible AI solutions?
Kartik: Human-centered design emphasizes the importance of designing AI systems with the end user in mind, considering their needs, preferences, and values. By involving users in the design process and considering their feedback and perspectives, AI systems can be better tailored to meet their needs and expectations.

Incorporating ethical considerations in AI development and deployment ensures that AI systems are designed and deployed in ways that prioritize fairness, transparency, accountability, and respect for human rights. This includes addressing biases, ensuring transparency and explainability, and safeguarding privacy and data security throughout the AI lifecycle.

Human-centered design and ethical considerations are also crucial for building Responsible AI solutions because such solutions should not only perform effectively but also align with societal values and norms. By incorporating these aspects into the design and development process, AI systems can gain user trust and acceptance, reducing the risk of unintended consequences or harm and contributing to positive societal outcomes.
Pallavi: Thank you, Kartik. Technology and data quality controls play a significant role in mitigating these risks. Could you explain their importance in ensuring reliable AI outputs?
Kartik: Technology and data quality controls play a significant role in mitigating risks associated with AI by ensuring the integrity and reliability of the data used for training and decision making. This involves implementing processes and mechanisms to assess and improve data quality, detect and correct biases, and validate the performance and accuracy of AI models.
Examples of technology and data controls include data pre-processing techniques to remove noise and outliers, bias detection algorithms to identify and mitigate biases in training data, and model interpretability techniques to enhance the transparency and explainability of AI decisions.
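To make these controls concrete, here is a minimal sketch in Python of two of the steps Kartik mentions: removing outliers from a numeric feature during pre-processing, and checking label rates across groups as a simple bias signal. The dataset and column names (income, group, label) are hypothetical placeholders; a production pipeline would use more robust methods.

```python
# A minimal sketch of two data quality controls:
# (1) removing outliers from a numeric feature before training, and
# (2) a simple check of positive-label rates across groups in training data.
# Column names ("income", "group", "label") are hypothetical.
import pandas as pd

def remove_iqr_outliers(df: pd.DataFrame, column: str, k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose value in `column` falls outside the IQR fences."""
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return df[df[column].between(lower, upper)]

def label_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-label rate per group; large gaps can indicate sampling bias."""
    return df.groupby(group_col)[label_col].mean()

if __name__ == "__main__":
    data = pd.DataFrame({
        "income": [30, 35, 32, 500, 28, 40, 38, 31],  # 500 is an obvious outlier
        "group":  ["A", "A", "B", "B", "A", "B", "A", "B"],
        "label":  [1, 1, 0, 1, 1, 0, 1, 0],
    })
    cleaned = remove_iqr_outliers(data, "income")
    print(label_rate_by_group(cleaned, "group", "label"))
```

A gap in label rates is only a signal, not proof of bias; as Kartik notes, such checks sit alongside interpretability techniques and human review.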
Pallavi: Thank you, Kartik. Considering the challenges that AI creates, how do you see AI addressing some of these issues? What are your thoughts on using AI to detect and remove biases in other AI systems?
Kartik: AI can be leveraged to address some of the challenges associated with its own deployment, such as biases and fairness concerns. By analyzing patterns in data and in the decision-making process, you can write algorithms that detect and mitigate biases in the AI system itself, thereby promoting fairness, transparency, and accountability.
For example, a fairness-aware machine learning technique can be used to identify and mitigate biases in AI algorithms used for tasks such as hiring, lending, or even criminal justice decision-making, helping ensure fair and equitable outcomes for all individuals.
While AI holds promise in addressing bias and promoting fairness in AI systems, it is important to recognize that it is not a panacea. Human oversight and intervention are still essential to ensure that AI systems are used ethically and responsibly. Additionally, ongoing research and collaboration are needed to develop and implement effective strategies for detecting and mitigating biases in AI systems across different domains and applications.
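As an illustration of the fairness-aware techniques Kartik describes, here is a minimal sketch in the spirit of the classic reweighing pre-processing approach (Kamiran and Calders): it measures the disparate impact ratio between groups and computes per-row sample weights that a downstream hiring or lending classifier could use. The data and column names are hypothetical.

```python
# A minimal sketch of a fairness-aware pre-processing step: measure the
# disparate impact ratio of outcomes per group, then compute per-row weights
# (expected / observed frequency of each (group, label) cell) so that group
# membership and label become statistically independent in the weighted data.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Ratio of positive rates between least- and most-favored groups.
    A value near 1.0 suggests parity; below 0.8 is a common red flag."""
    rates = df.groupby(group_col)[label_col].mean()
    return rates.min() / rates.max()

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight each row by expected / observed frequency of its (group, label) cell."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n
    expected = df.apply(lambda r: p_group[r[group_col]] * p_label[r[label_col]], axis=1)
    observed = df.apply(lambda r: p_joint[(r[group_col], r[label_col])], axis=1)
    return expected / observed

if __name__ == "__main__":
    df = pd.DataFrame({
        "group": ["A"] * 6 + ["B"] * 4,
        "label": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],  # group A is favored here
    })
    print(f"disparate impact: {disparate_impact(df, 'group', 'label'):.2f}")
    weights = reweighing_weights(df, "group", "label")
    print(weights.round(2).tolist())  # pass as sample_weight to a classifier
```

This is only one of many mitigation strategies, and, as Kartik stresses above, it complements rather than replaces human oversight.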
Pallavi: Thank you, Kartik, for joining us today and sharing your valuable insights on Responsible AI. It has been an enlightening conversation for me and all our listeners.
Kartik: Thank you, Pallavi.
Pallavi: On that note, thank you to all our listeners for joining us for this episode of the Tech Trends series. Stay tuned for more discussions on the latest tech trends. Until then, if you would like us to cover any specific topic, please feel free to share it with us on our website or mail us at markets.eyindia@in.ey.com. Thanks for tuning in, and goodbye.