
Regulation of Artificial Intelligence in India: Mapping the Future

MARCH 25, 2021 - 2 MINS READ

India is setting the stage for regulating the ethical use of AI and data, thereby joining the global 'Ethics for AI' debate. The Indian Government has not yet issued a national policy on a regulatory framework for AI; however, policy documents issued by the NITI Aayog portray a clearer picture of a self-regulatory mechanism in the coming future. These policy documents include the following (“Indicative Policies”):

 

(i) "National Strategy for Artificial Intelligence #AIforAll" (issued June 2018)

(ii) "Working Document: Towards Responsible #AIforAll – Part I" (issued August 2020)

(iii) "Working Document: Towards Responsible #AIforAll – Part I" (issued November 2020)

(iv) "Approach Document for India Part 1 – Principles for Responsible AI" (February 2021)

The Indicative Policies suggest that existing laws in India are sufficient to meet the challenges that AI poses directly to Indian society. The Indicative Policies term these challenges "System Considerations", and suggest that sector-specific modifications and alignments to existing laws are required to face them effectively. There are also other challenges which impact society indirectly, such as deep fakes, loss of jobs, psychological profiling, and malicious use. For challenges having indirect impact, such as loss of jobs, the Indicative Policies suggest skilling and adapting legislation and regulation, which would in turn harness new job opportunities in the Indian market. Interestingly, the recommendation for dealing with the malicious use of AI to spread hate or propaganda is to use the technology itself for proactive identification and flagging.


The Indicative Policies also explain the societal-impact-based ethical challenges that can arise from the application of AI. They recognize the issues of the 'black box' phenomenon, personal data privacy, selection bias, profiling and discrimination risks, data collection without proper consent, and a lack of transparency in some AI solutions. They also identify reputational issues stemming from public fear that Big Tech companies inappropriately harness large volumes of consumer data to understand consumer behavior, and that large datasets are being amassed by these companies.


The Indicative Policies suggest the ethical and conscious development of 'XAI', or explainable AI, keeping in mind the concept of 'differential privacy' and implementing 'federated learning', whereby 'data trusts' are created for uncompromised data sharing. The Indicative Policies also suggest three broader principles: (i) explainability, using pre-hoc and post-hoc techniques; (ii) privacy and data protection, using federated learning, differential privacy, zero-knowledge protocols or homomorphic encryption; and (iii) eliminating bias and encouraging fairness, using tools such as IBM's 'AI Fairness 360', Google's 'What-If' Tool, Fairlearn, and open-source frameworks such as FairML.
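To make the fairness principle concrete, the sketch below illustrates the kind of check that tools like Fairlearn automate: the 'demographic parity difference', i.e. the gap in positive-decision rates between groups. The data and group labels are purely hypothetical, and this hand-rolled function is an illustration of the metric, not the API of any of the tools named above.

```python
# Illustrative sketch: demographic parity difference, the gap between
# groups in the rate of positive model decisions. All data is hypothetical.
def demographic_parity_difference(y_pred, groups):
    by_group = {}
    for pred, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(pred)
    # Selection rate = share of positive decisions within each group
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

y_pred = [1, 0, 0, 0, 1, 1, 1, 0]                   # model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]   # sensitive attribute
gap = demographic_parity_difference(y_pred, groups)
print(gap)  # 0.5 — group B is approved at 75% vs. 25% for group A
```

A gap of 0.0 would mean both groups receive positive decisions at the same rate; a self-audit would flag a large gap for review.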

 

Suggestions on Self-Regulation & Self-Audit


The Indicative Policies suggest implementing the following essentials for self-regulation:

  

  • Problem Scoping: assessing potential harm from the AI system.

  • Data Collection: keeping track of known sources of data, and of steps taken to ensure privacy and safety.

  • Data Labelling: tracking human variability and biases.

  • Data Processing: ensuring masking of personal and sensitive data.

  • Training: training towards fairness goals and the protection of sensitive and personal data.

  • Evaluation: evaluating the system's fairness goals, adversarial inputs, and error rates across subpopulation groups.

  • Deployment: ensuring easy accessibility of the grievance redressal mechanism, and assessing the impact of real-world bias.

  • Dynamic Assessment: dynamically monitoring fairness goals, and ensuring that third parties can audit, probe, understand and review the behavior of the system.
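As an illustration of the "Data Processing" essential above, the sketch below masks personal data in free text before it enters an AI pipeline. The regular expressions are deliberately simplistic examples, assumed for illustration only, and are nowhere near production-grade PII detection.

```python
# Illustrative sketch of the "Data Processing" self-audit step:
# masking personal and sensitive data. Patterns are simplistic examples.
import re

MASKS = [
    (r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>"),  # e-mail addresses
    (r"\b\d{10}\b", "<PHONE>"),               # 10-digit phone numbers
]

def mask_pii(text: str) -> str:
    for pattern, token in MASKS:
        text = re.sub(pattern, token, text)
    return text

masked = mask_pii("Reach Asha at asha@example.com or 9876543210.")
print(masked)  # Reach Asha at <EMAIL> or <PHONE>.
```

A real system would go further, covering government ID numbers, addresses and names, but the principle is the same: sensitive fields are replaced before data is labelled, processed or shared.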

For more on the topic, get in touch with us at info@tbalaw.in
