How to balance risk and governance in GenAI


Tarannum: Hello and welcome to the EY India Insights podcast. I am your host, Tarannum. In our latest episode of the ey.ai podcast series, we will explore the timely topic of trust in GenAI, particularly in the context of Technology Risk Services. While Generative AI holds tremendous potential, it also comes with significant risks and limitations.

What if things go wrong? We need to have a plan. To facilitate our discussion, we have with us Abbas Godhrawala, a Partner in the risk consulting practice with EY India. With more than 22 years of experience, Abbas is a seasoned professional with expertise in governance, risk and compliance, IT consulting, information security, digital risk, and more. Abbas has played a pivotal role in helping clients execute strategies and overarching programs that allow for rigorous, structured decision-making and remain resilient in the face of an ever-evolving industry. He has also helped clients identify and govern potential business, information security, and operational risks.

Thank you, Abbas, for joining us in this episode.

Abbas Godhrawala: Thanks a lot, Tarannum.

Tarannum: So, Abbas, let us get right into it. As a technology risk professional yourself, you must regularly confront the regulatory aspects of Generative AI across different domains. How do you see its impact across various sectors?

Abbas Godhrawala: When we consider the impact of GenAI in the overall scheme of things, I think there are a lot of profound, transformational changes unfolding as businesses increasingly adopt GenAI technologies to drive innovation and efficiency. Regulatory bodies are faced with the challenge of adapting existing frameworks or introducing new regulations to address the unique risks associated with this advancement. The risks in GenAI adoption have stirred up considerable regulatory responses worldwide. For instance, we have witnessed the introduction of new AI standards through initiatives such as President Biden’s Executive Order on safe, secure, and trustworthy AI. We have also seen NIST roll out the AI Risk Management Framework (AI RMF).

We also have the European Union, which is enacting the EU AI Act, while the European Union Agency for Cybersecurity (ENISA) is holding active discussions on AI cybersecurity. And let us not forget HITRUST, which recently released the latest version of its Common Security Framework, now including specifically tailored AI-related risk management. Adding to this regulatory landscape, the International Organization for Standardization (ISO) has also published ISO/IEC 42001, highlighting the increased regulatory focus on handling AI risk. So there are various frameworks, such as the NIST AI RMF, ISO/IEC 42001, and the EU AI Act, and the bodies behind them are actively engaging with industry stakeholders to establish robust guidelines for the responsible development and deployment of AI technologies, including GenAI. As tech risk consultants, it is important for us to align our practices with these frameworks and collaborate with our clients to ensure adherence to regulatory requirements while leveraging the transformative potential of GenAI in a responsible manner.

Tarannum: Thank you for those insights, Abbas. Now, the tech risk sector specifically revolves around evaluating risks and making informed decisions. This focus aligns well with the potential of generative AI to improve efficiency in risk assessment and decision-making processes. However, widespread adoption also raises significant questions around compliance and governance. What, in your opinion, are the risks that organizations need to manage when leveraging GenAI capabilities?

Abbas Godhrawala: GenAI is a hot topic in the tech world, promising incredible potential for automation, content creation, and innovation. However, it is essential to acknowledge the weight of this potential and the accountability it entails. AI models, including GenAI models, can be susceptible to manipulation through data poisoning or adversarial prompt design that produces misleading output. Addressing these risks is key to ensuring responsible and trustworthy AI development. Here is where the real challenge comes in. One big one is bias. These models are trained on massive datasets, and if the dataset is biased, guess what? The AI picks up on that bias and perpetuates it. Let us say you have a hiring tool that learns from past hiring decisions. If those decisions were biased, the AI could continue that bias. That is a significant ethical concern. Another challenge is transparency.

These models can be complete black boxes, making it hard to understand how they arrive at their decisions. And let us not forget security. Generative AI can be misused to create deepfakes or manipulated data, and models can hallucinate, all of which pose serious threats. Organizations need strong data security controls in place to manage these risks. The key is to ensure responsible and ethical use of GenAI. Organizations need clear guidelines and frameworks to promote fairness, transparency, and accountability. Finally, transparency and responsible development are essential for building trust between stakeholders and prospective clients. Effective governance is the linchpin of successful implementation. Organizations need dedicated leadership overseeing GenAI development and implementation, aligning it with risk management strategies. A comprehensive framework that identifies potential risks, outlines mitigation strategies, and establishes monitoring controls is also critical.
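
To make the hiring example above concrete, here is a minimal, purely illustrative Python sketch (not an EY tool or any real hiring system) of how a model trained on historically biased decisions can reproduce that bias even for equally skilled candidates. The data, group labels, and thresholds are invented for demonstration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "past hiring decisions": skill drives outcomes, but historical
# decisions also favoured group 0 over group 1 at the same skill level.
rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)                       # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

# A screening model trained on the biased labels learns the bias.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Fairness check: predicted selection rates for equally skilled candidates.
equal_skill = np.column_stack([np.zeros(1000), np.repeat([0, 1], 500)])
preds = model.predict(equal_skill)
print("Selection rate, group 0:", preds[:500].mean())
print("Selection rate, group 1:", preds[500:].mean())

Comparing the two printed selection rates is one simple way to surface the kind of bias the speaker describes before such a tool is ever deployed.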

Tarannum: So, Abbas, how does the pace of Generative AI innovation surpass the capacity of current global initiatives to effectively regulate the technology and its applications? What proactive measures do you think organizations can implement to ensure responsible and trustworthy utilization of AI?

Abbas Godhrawala: Certainly. It is evident that organizations are exploring opportunities to enhance efficiency, drive innovation, and remain competitive in their respective industries. However, with these opportunities comes the recognition of the challenges that GenAI poses, along with the need to enforce governance guardrails that prioritize responsible and ethical use of AI. At EY, we prioritize proactive measures to ensure responsible and effective utilization of AI through our GenAI Risk and Governance Framework. We kick-start the process by assessing an organization’s existing policies, procedures, and security standards, gaining a deep understanding of its approach to AI. With this insight, we craft or refine policies and procedures tailored specifically to GenAI, ensuring they integrate seamlessly with our risk and governance framework. Aligned with EY’s responsible AI principles, such as fairness, transparency, and privacy, we conduct a thorough risk assessment of the GenAI solution, identifying potential risks and suggesting corresponding controls to mitigate them. Clients are also looking at doing gap assessments against the NIST AI RMF or the ISO/IEC 42001 standard, as well as building their overall GenAI risk and governance framework. Ultimately, our goal is to empower clients to leverage AI with integrity, accountability, and compliance at the forefront of their operations, thereby fostering trust and ethical AI practices in a rapidly evolving technological landscape.
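
As a rough illustration of the kind of risk assessment and gap check described above, the sketch below shows one possible way to structure a GenAI risk register in Python, mapping each identified risk to mitigating controls and framework references. The field names, example entries, and control wording are assumptions made for illustration and do not represent EY’s actual framework.

from dataclasses import dataclass, field

@dataclass
class GenAIRisk:
    risk_id: str
    description: str
    category: str                     # e.g., bias, transparency, security
    likelihood: str                   # low / medium / high
    impact: str                       # low / medium / high
    controls: list = field(default_factory=list)        # mitigating controls
    framework_refs: list = field(default_factory=list)  # e.g., NIST AI RMF, ISO/IEC 42001

register = [
    GenAIRisk("R-001", "Training data bias leads to discriminatory outputs",
              "bias", "medium", "high",
              controls=["Dataset bias review", "Fairness metrics in validation"],
              framework_refs=["NIST AI RMF: Measure", "ISO/IEC 42001"]),
    GenAIRisk("R-002", "Prompt injection exposes confidential data",
              "security", "medium", "high"),
]

# Simple gap check: flag risks that have no mapped mitigating controls yet.
gaps = [r.risk_id for r in register if not r.controls]
print("Risks without mitigating controls:", gaps or "none")

A register like this makes the gap assessment mechanical: any risk with an empty controls list is an open item to address before deployment.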

Tarannum: Thanks, Abbas. And, lastly, what is the way ahead in terms of incorporating AI within organizations and leveraging its innovative capabilities to help transform businesses?

Abbas Godhrawala: The compliance landscape for GenAI is constantly changing. To stay ahead of the curve, companies should actively track developments and conduct thorough assessments of how their GenAI platforms and systems might affect their ability to comply with relevant standards. Building a robust risk and governance structure is crucial to helping businesses understand and meet changing regulatory requirements. By staying abreast of technological developments and regulatory updates, the firm can provide valuable guidance to its clients, enabling them to navigate the complex landscape of regulatory compliance effectively. Human oversight remains irreplaceable, even with automation. Humans should be involved in critical decision-making processes and be responsible for model training and validation. By addressing these challenges and implementing robust governance and regulatory frameworks, organizations can harness the power of GenAI to enhance their tech risk management practices and drive innovation.
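
As a simple illustration of the human-oversight point, the following Python sketch routes GenAI outputs whose estimated risk exceeds a threshold to a human reviewer before release. The generation call, the risk heuristic, and the threshold are hypothetical placeholders, not a real GenAI integration or an EY control.

def generate_draft(prompt: str) -> str:
    # Stand-in for a call to a GenAI model.
    return f"Draft response to: {prompt}"

def estimate_risk(draft: str, sensitive_terms=("salary", "diagnosis", "credit")) -> float:
    # Toy heuristic: flag drafts that touch sensitive topics.
    hits = sum(term in draft.lower() for term in sensitive_terms)
    return min(1.0, hits / len(sensitive_terms))

def release_with_oversight(prompt: str, threshold: float = 0.3) -> str:
    # High-risk drafts are held for a human to approve, edit, or reject.
    draft = generate_draft(prompt)
    if estimate_risk(draft) >= threshold:
        return f"[HELD FOR HUMAN REVIEW] {draft}"
    return draft

print(release_with_oversight("Summarize the quarterly IT risk report"))
print(release_with_oversight("Draft a note about the employee's salary revision"))

In practice the risk scoring would be far more sophisticated, but the design point stands: automation handles routine output, while a human remains accountable for anything touching critical decisions.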

Tarannum: Excellent. Thank you, Abbas, for this insightful conversation, and I am sure our listeners have learned immensely from this timely discussion.

Abbas Godhrawala: Thanks. It has been a pleasure.

Abbas Godhrawala: Thank you to all our listeners for tuning into this conversation. Your feedback and questions are invaluable to us. Feel free to share your thoughts on our website or email us at markets.eyindia@in.ey.com. From all of us at EY India, thank you for tuning in.

Story from www.ey.com

Disclaimer: The views expressed in this article are independent views solely of the author(s) expressed in their private capacity.
