Summary and Analysis of NITI Aayog's Responsible AI Report on July 21, 2020
M Tanvi
Alumnus,
National Law University, Odisha
Isha Prakash
Student,
Government Law College, Mumbai
The term ‘Artificial Intelligence’ or AI was coined in 1956 at Dartmouth College; however, it was only in the 2000s that AI was put to practical use by IBM.[1] The rest is history! The global AI market is expected to grow by approximately 154% and reach a size of 22.6 billion U.S. dollars by the end of this year.[2] These figures will only rise in the future. Most of us have an AI system installed for personal use, be it a chatbot, Amazon’s Alexa, Apple’s Siri or another voice assistant. More AI systems with complex features are currently under development. AI will soon form an integral part of all business sectors and will boost India’s annual growth by 1.3% by 2020.[3]
This gives rise to some pertinent questions: How are AI systems governed in India? Who would be responsible for any default on the part of an AI system: the developer, the company deploying it, or the AI itself? Can an AI system infringe our privacy? To tackle these questions, NITI Aayog released a working paper on ‘Responsible Artificial Intelligence (AI)’ on 21st July 2020.[4] The following are the key points of discussion in the working paper:
Challenges in studying AI systems
1) Direct impact challenge: Arises when people are subject to a specific AI system. This is also known as systems consideration. For example: privacy concerns.
2) Indirect impact challenge: Arises from the overall deployment of AI solutions in society. This is also known as societal consideration. For example: loss of jobs due to the development of AI.
Objectives of the study
Establish ‘Principles for Responsible AI’.
Identify possible policy measures and recommendations for its regulation.
Enforce guidelines and incentive mechanisms for Responsible AI.
Study of systems consideration
The paper identifies several issues with systems consideration and presents a table comparing the laws that govern AI in different countries.
It is observed that India does not have any guidelines or standards that could be applied to AI. Although there are some privacy laws in place, there is a need for AI-specific laws to be formulated.
Study of societal consideration
The impact of technology and innovation on the job landscape is not new; the manufacturing and IT sectors in particular have been affected by the growth of technology. It should be considered that, with the rise of AI, several routine jobs may be taken over by AI systems.
In the near future, job profiling could be driven by data collection and interpretation.
Profiling by AI could also be subject to hidden propaganda and may result in social disharmony. For example, in Myanmar, online platforms were used to spread hate speech and fake news targeted at a particular community.
Identification of propaganda and hate speech is less advanced for posts in local languages. Research efforts must be dedicated to improving the technology in these areas.
Principles
The paper lays down the following principles for developing effective AI systems:
Principle of Safety and Reliability
Principle of Equality
Principle of Inclusivity and Non-discrimination
Principle of Privacy and Security
Principle of Transparency
Principle of Accountability
Principle of Protection and Reinforcement of Positive Human Values
These principles have not been explained further by NITI Aayog.
Enforcement of principles
These principles are to be managed by experts from the technology, sector-specific, and legal/policy fields.
Principles are to be updated as per emerging cases and challenges.
Various bodies involved in setting standards and regulations for AI are to be guided by the Government.
The Government should develop sector-specific guidelines, such as for healthcare, finance and education. The existing guidelines are to be complied with.
The Government should also establish institution-specific enforcement for public bodies, private companies and research institutes.
Self-assessment guide
Analysing the problem by engaging experts, identifying errors in AI and developing a plan.
Collection of data by identifying relevant laws and documents that regulate AI.
Labelling data that might have a human bias.
Data should be processed in a manner that only relevant data is used and sensitive data is excluded.
AI system should be trained to ensure fairness.
The error rate and fairness of the AI system should be evaluated.
A feedback mechanism and a grievance redressal mechanism should be made accessible to the users of the AI system.
There should be constant risk assessment, fairness assessment and performance assessment of the AI system.
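The evaluation steps above (error rate, fairness assessment) can be illustrated with a small sketch. This is not from the working paper: the data, group labels and function names below are all hypothetical, and it shows only one simple notion of fairness (comparing error rates across sensitive groups).

```python
# Illustrative sketch (not from the NITI Aayog paper): computing the overall
# error rate of a binary classifier and comparing error rates across
# sensitive groups as a simple fairness check. All data is hypothetical.

def error_rate(y_true, y_pred):
    """Fraction of predictions that differ from the true labels."""
    errors = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return errors / len(y_true)

def group_error_rates(y_true, y_pred, groups):
    """Error rate per sensitive group (e.g. gender, region)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = error_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    return rates

# Hypothetical labels and predictions for two demographic groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

overall = error_rate(y_true, y_pred)              # 2 errors / 8 = 0.25
per_group = group_error_rates(y_true, y_pred, groups)
# A large gap between group error rates signals a potential fairness issue.
gap = abs(per_group["A"] - per_group["B"])
```

In a real assessment this check would be run repeatedly, alongside risk and performance monitoring, rather than once at deployment.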
This working paper is open for public consultation until 10th August 2020. Do give your opinions on it at <civis.vote> or email them to <annaroy@nic.in>.
References
[1] The Rise Of AI: How Did This Happen?, Olivia Folick, Ideal, available at <https://ideal.com/rise-of-ai/>, last seen on 01/08/2020.
[2] Worldwide artificial intelligence market growth, Statista, available at <https://www.statista.com/statistics/607960/worldwide-artificial-intelligence-market-growth/>.
[3] Working Document: Towards Responsible #AIforAll, NITI Aayog, available at <https://niti.gov.in/sites/default/files/2020-07/Responsible-AI.pdf>, last seen on 01/08/2020.
[4] Ibid.