India's new white paper proposes a "techno-legal" framework for AI governance that balances innovation with risk
India Releases White Paper on AI Governance, Proposes Techno-Legal Framework
India’s Office of the Principal Scientific Adviser (PSA) has released a white paper on artificial intelligence (AI) governance, proposing a “techno-legal” framework to balance innovation with risk.
According to the PSA’s press release, the paper, titled “Strengthening AI Governance Through Techno-Legal Framework”, outlines a structured institutional approach to ensuring the safe, trusted and responsible development of AI in the country. It stresses that effective implementation is key to the success of any AI policy.
The proposed framework aims to strengthen India’s AI ecosystem by bringing together industry, academia, government bodies, AI developers, deployers and users. A key proposal is the creation of an AI Governance Group (AIGG), to be chaired by the Principal Scientific Adviser. The group will coordinate among government ministries, regulators and policy bodies to address gaps and fragmentation in current AI governance.
The AIGG will work to establish uniform standards for responsible AI, promote beneficial use of AI across sectors, identify regulatory gaps and recommend legal changes where required.
To support this group, a Technology and Policy Expert Committee (TPEC) will be set up under the Ministry of Electronics and Information Technology (MeitY). The committee will include experts from law, public policy, machine learning, AI safety and cybersecurity, and will advise on national priorities and global AI policy developments.
The framework also proposes the establishment of an AI Safety Institute (AISI), which will evaluate, test and ensure the safety of AI systems deployed across sectors. The institute will support the IndiaAI mission by developing tools to address issues such as content authenticity, bias and cybersecurity, and will produce risk and compliance reports to guide policy decisions.
In addition, a national AI Incident Database will be created to track safety failures, biased outcomes and security breaches after AI systems are deployed. The database will be based on global best practices but adapted to India’s regulatory and sectoral needs, with reports submitted by government bodies, private companies, researchers and civil society organisations.
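The white paper does not specify how such incident reports would be structured. Purely as an illustration of the kind of information the database is described as collecting, the sketch below models a single report; every field name and category here is a hypothetical assumption, not something taken from the paper.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class IncidentType(Enum):
    # Categories drawn from the white paper's description of what the database would track
    SAFETY_FAILURE = "safety_failure"
    BIASED_OUTCOME = "biased_outcome"
    SECURITY_BREACH = "security_breach"


class ReporterType(Enum):
    # Groups the paper names as potential submitters of reports
    GOVERNMENT_BODY = "government_body"
    PRIVATE_COMPANY = "private_company"
    RESEARCHER = "researcher"
    CIVIL_SOCIETY = "civil_society"


@dataclass
class IncidentReport:
    """Hypothetical record for a post-deployment AI incident (illustrative only)."""
    incident_type: IncidentType   # what kind of failure occurred
    reported_by: ReporterType     # who submitted the report
    sector: str                   # e.g. "healthcare", "finance"
    system_name: str              # the deployed AI system involved
    occurred_on: date             # when the incident took place
    description: str              # free-text account of what happened
    mitigations: list[str] = field(default_factory=list)  # remedial steps taken, if any
```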
The white paper also encourages voluntary industry commitments and self-regulation, such as transparency reports and red-teaming exercises. The government plans to offer financial, technical and regulatory incentives to organisations that adopt responsible AI practices.
Overall, the framework aims to promote consistency, continuous learning and innovation while providing clarity and confidence to businesses developing and deploying AI in India.