Last week, during the Global India AI Summit, the Ministry of Electronics and Information Technology (MeitY) hosted a pivotal session focusing on ‘Ensuring Trust, Safety, and Governance in AI’ under the broader IndiaAI initiative. The session highlighted the critical need for a comprehensive and balanced approach to AI governance, emphasizing the convergence of technology, policy, ethics, and societal needs.
Given the rapid advancement of AI, effective governance is crucial to promote responsible growth and integration. Building on insights from last year’s Global Partnership on Artificial Intelligence (GPAI) Summit, which examined international AI governance landscapes, this year’s focus shifted to a domestic perspective. The aim is to develop an AI governance framework that addresses both global and local contexts.
India is proactively addressing AI governance through the Safe and Trusted AI Initiative under the IndiaAI Mission. This mission seeks to encourage responsible AI practices, with a strong emphasis on fairness, transparency, and security. MeitY’s recent call for expressions of interest invites organizations to explore and propose frameworks for responsible AI across various sectors, covering topics such as machine learning, synthetic data generation, and algorithmic fairness.
Objectives and Contributions
The session aimed to:
- Highlight the components and objectives of the Safe and Trusted AI Initiative.
- Explore India’s role and contributions in international forums like the G20 and GPAI.
- Integrate ethics into AI development, aligning with UN Sustainable Development Goals (SDGs).
- Balance technological innovation with governance to protect citizen interests.
In his keynote address, Shri S. Krishnan, Secretary (Electronics & Information Technology), Government of India, emphasized the imperative to approach AI governance with caution and responsibility. He acknowledged the historical fears associated with technological advancements but highlighted the need for regulatory guardrails to harness AI’s potential while mitigating its risks. Drawing parallels with past technological revolutions, he noted how societies have adapted and introduced regulation to manage the impacts of new technologies.
Shri S. Krishnan stressed the multifaceted fears around AI, from potential job losses to societal harms like misinformation and privacy invasion. He underscored the importance of global and multi-stakeholder consultations to develop sensible and effective AI regulations.
India’s approach to AI regulation is deliberate and consultative, drawing on global experience to formulate policies suited to the country’s unique national context. The Data Protection Act and existing laws addressing misinformation and privacy are part of this evolving framework. Shri S. Krishnan also highlighted the responsibility of AI developers to test and label synthetic content, ensuring transparency and accountability.
Despite the concerns, Shri S. Krishnan remained optimistic about AI’s potential to drive economic growth and innovation, particularly in India. He called for a techno-legal framework that balances regulation with technological advancements, ensuring AI’s benefits are fully realized while minimizing harm.
The session underscored the importance of thoughtful, inclusive, and balanced AI governance, aligning technological progress with societal values and ethical standards.