Legal Frameworks for AI Discrimination: Global Practices and India’s Policy Landscape | Part II
- Mihir Nigam
- Aug 27, 2024
- 16 min read

Part II | Principle of Non-Discrimination in AI Governance
This part of the article examines global legal approaches to AI discrimination and contrasts them with India's evolving tech policy. It reviews international frameworks and regional regulations, highlighting diverse approaches. The analysis focuses on India’s tech-policy landscape, including the proposed Digital India Act, 2023, and TRAI’s recommendations. The article argues for comprehensive AI legislation, emphasizing the need for specialized regulation to effectively address the problem of discrimination and bias in AI systems.
INTRODUCTION
The main argument against special legislation for the regulation of artificial intelligence is based on the premise that existing horizontal laws, like consumer law, data protection law, competition law, and labour law, already govern the various verticals of the market and will also be relevant to the application of artificial intelligence systems, and of goods and services utilizing such systems.[1] This argument stems from the reasoning that a separate AI legislation would only hinder innovation, with stringent rules and a chaotic application of law, possibly creating a landscape in which beneficial applications of AI systems might need to be discontinued.
Notwithstanding the intent behind these horizontal laws, evaluating the merit of this argument requires thorough scrutiny of their efficacy in addressing discrimination.
Understanding the Limitations of Existing Legal Framework in Regulating AI Discrimination
In the context of non-discrimination, the legal landscape in India is scattered and outdated, which is why there has been a long-standing demand for a comprehensive anti-discrimination law.[2] The Constitution of India provides for both the right against discrimination and protective discrimination[3] under Articles 14 to 17. Furthermore, (i) the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act, 1989; (ii) the Protection of Civil Rights Act, 1955; and (iii) the Transgender Persons (Protection of Rights) Act, 2019, contain provisions punishing those who enforce social disabilities, refuse to admit people into public institutions, or deny access to goods and services on the basis of caste, sex, or gender.
However, the constitutional right against discrimination is primarily focused on the relationship between the state and its citizens. Consistent judicial rulings have established that writs, excluding the writ of habeas corpus, are mainly enforceable against the state.[4] The same can be seen from the case of Dr. Anand Gupta v. Rajghat Education Centre and Others,[5] where it was held that “the writ petition is not maintainable, as it has been filed against a private body, namely, Rajghat Education Centre, Varanasi… Ordinarily, no writ lies against a private body except a writ of habeas corpus.”
The big question that now arises is: does a person have any remedy or legal recourse in scenarios, such as those explored in Part I, where an AI system used by a private body (not within the ambit of “state” under Article 12) systematically discriminates against candidates from a specific category?
The answer, unfortunately, is a clear “no.” Unlike the Indian Constitution, which prohibits the state from discriminating against any citizen based on religion, race, caste, sex, place of birth, or any combination thereof, there is no comparable comprehensive legislation that applies to the private sector.[6] The hiring and firing practices of private firms and companies are predominantly governed by their internal bye-laws and agreements. However, it is advisable for these organizations to implement a comprehensive anti-discrimination policy to effectively address potential biases and discriminatory practices.
A study by Mamgain confirms that employers practice subtle forms of discrimination among workers during the recruitment process. Employers are more interested in factors like “family background,” “employee referrals,” and “communication and language skills of the applicant,” which may not relate to the job and which put many job applicants from lower-class backgrounds at a decided disadvantage.[7]
While discrimination in hiring is not directly regulated by law, some legislation exists to protect against workplace discrimination in the private sector once employment has commenced. Specifically, there are laws against harassment and discriminatory practices in respect of women, including those who are pregnant or on any form of maternity leave, persons with disabilities, transgender persons, persons with HIV and AIDS, and a special category of non-managerial employees (known as “workmen” under Indian industrial relations law).[8] These protections are cumulative in nature and relate to what are known as “protected criteria.”
These criteria are considered “protected” because they are legally recognized as essential for ensuring fair treatment and equal opportunities. Laws are established to prevent individuals from being disadvantaged or treated unfairly on the basis of these protected criteria. Consider an example where an employer implements an AI profiling and evaluation system for promotion decisions, and that system is trained on a biased dataset, with the result that individuals falling within the protected criteria are unfairly excluded from consideration. The remedy for such discrimination may be found under the relevant discrimination laws, because the issue pertains to the unfair treatment of individuals based on their membership of a “protected class.”
Addressing workplace discrimination involves more than merely adhering to minimum legal standards and focusing on “protected criteria.” The fact that measures to prevent discrimination are largely left to the employer’s discretion can be counterproductive, especially given that private sector employees often lack statutory rights and specific legal remedies for discrimination based on factors such as religion, race, caste, or community.[9] Without robust legal frameworks mandating comprehensive anti-discrimination practices, employees in the private sector may find it challenging to seek redress for unfair treatment. Therefore, relying solely on employer discretion may not suffice to ensure a fair and inclusive workplace, highlighting the need for more stringent and uniform regulatory measures.
Hence, in conclusion, as far as the applicability of the various horizontal laws to AI systems is concerned, two principal problems arise:
1) In the absence of a national anti-discrimination law, the horizontal laws may not be effective in adequately addressing the problem of discrimination in the public as well as the private sector.
2) Even where an anti-discrimination law exists, once such discrimination is embedded in an artificial intelligence system, it becomes much more difficult to determine whether discrimination has occurred at all. The burden and difficulty of proving discrimination would thus fall on the victim, leaving them without an effective remedy.
Therefore, a special piece of legislation for regulating AI technologies is required. Such legislation should incorporate the principles of equality and non-discrimination and should protect rights such as the right not to be subjected to decisions made by an autonomous decision-making system. In addition, it should provide an effective mechanism for the prompt and sufficient redressal of incidents of discrimination.
The current horizontal legal frameworks may prove insufficient in addressing the discrimination perpetuated by AI systems. Consequently, there is a pressing need for a principles and rule-based regulatory approach, particularly in contexts like India, where existing legal structures are failing to adequately address discrimination within the private sector.
Bias in AI has recently become an extremely concerning issue. The cost of addressing biases after systems are deployed is not only economically unfeasible but often impossible. If we cannot reliably ensure that bias is excluded from the data pipeline, then the potential harm caused by biased data is compounded through its interaction with the design choices present in the model. Therefore, special legislation addressing the possibility of design bias in models, along with efforts to manage its influence, is crucial for reducing potential harm.
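To make concrete how such bias might be detected once a system is deployed, the short Python sketch below applies the “four-fifths rule”, a screen long used by the US Equal Employment Opportunity Commission for adverse impact in selection procedures, to the outcomes of a hypothetical automated screening tool. The group names and numbers are invented for illustration; the rule is an American regulatory heuristic, not a standard under Indian law.

```python
# Illustrative sketch of the "four-fifths rule" screen for disparate
# impact. All group names and counts below are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants from a group who were selected."""
    return selected / total if total else 0.0

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Under the four-fifths heuristic, a ratio below 0.8 is
    conventionally treated as evidence of adverse impact that
    warrants closer scrutiny of the selection procedure.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical audit of an AI screening tool's outcomes by group
outcomes = {
    "group_a": selection_rate(selected=45, total=100),  # 0.45
    "group_b": selection_rate(selected=18, total=100),  # 0.18
}

ratio = disparate_impact_ratio(outcomes)
flagged = ratio < 0.8  # four-fifths threshold
print(f"ratio={ratio:.2f}, flagged={flagged}")
```

A ratio this far below 0.8 would, under that heuristic, flag the tool for a closer audit of its training data and design choices, precisely the pipeline-level scrutiny discussed above.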
The stance coming from industry is that the issue of discrimination in AI systems is an intricate one, and that it may be more appropriate to leave the resolution of such details to sector-specific regulators in the initial stages.[10]
From General to Specialized AI in Ensuring Impartiality for High-Stakes Decision-Making Systems
We need to accept that a good proportion of the literature before us, in the form of history, political analysis, news reports, online information, articles, and opinions, is generally written from a prejudiced or biased point of view. We simply cannot afford to ignore these resources in the name of non-discrimination. At least for general-purpose conversational agents, design and development strategies for mitigating bias and strengthening the impartiality of outputs presented to users can typically offset the biases trained into a system from possibly biased data.
However, the case should be different when one considers specialized artificial intelligence systems, mainly those used in decision-making contexts that have significant consequences for individuals: AI systems that assist in judicial decisions, such as bail determinations, sentencing recommendations, or parole decisions; systems that assist in diagnosing medical conditions or recommending treatment plans; systems for selecting applicants for college admissions; systems for hiring and promotion in companies; and systems for credit scoring and loan approval. We must be very cautious in such scenarios. Sources of prejudice or bias should not be used in the training datasets for such specialized systems. This involves scrutiny of the sources of data, the processes for data annotation, and the training of models in such a way that the parameters of fairness and objectivity are fully met. Therefore, the threshold for meeting standards of non-discrimination should be high in such cases.
How Different Countries are Adopting Non-Discrimination Standards in AI Regulations
The OECD has been a major player in this effort, providing intergovernmental standards on artificial intelligence. Five value-based principles were adopted by the OECD on 22 May 2019, under Section 1 of the “Recommendation of the Council on Artificial Intelligence,”[11] to promote the innovative and trustworthy use of AI while respecting human rights and democratic values.[12]
AI Principle 1.2, titled “Respect for the rule of law, human rights and democratic values, including fairness and privacy,” provides that –
“a) AI actors[13] should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights…
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.”
As far as the incorporation of this principle into national legislation goes, countries have shown two regulatory approaches to addressing discrimination under their proposed or enacted AI regulations –
Countries with Explicit Definitions of Discrimination: Countries in the first category have clear and specific legal definitions of discrimination within their regulations. These include Brazil, Canada, the EU, and South Korea.[14]
Countries Regulating Discrimination Without Explicit Definitions: These countries aim to address discriminatory practices but do so without providing a precise legal definition of discrimination; their approaches may be more general or implicit. These include Argentina, China, and the US.
The regulatory landscape for artificial intelligence (“AI”) systems exhibits considerable variance depending on regional approaches and specific legislative frameworks.[15] For instance, Argentina, South Korea, and the United States have adopted broad regulatory measures that encompass all AI systems, reflecting a comprehensive stance on the governance of AI technologies. In contrast, Canada and the European Union have tailored their regulatory efforts to focus predominantly on high-risk AI systems, thereby prioritizing oversight based on the potential impact and risks associated with these technologies. Meanwhile, China’s regulations are notably specific, targeting particular technological domains such as generative AI and recommendation algorithms. This nuanced approach suggests a targeted strategy aimed at addressing the most immediate and impactful aspects of AI development.
Brazil presents a hybrid model, incorporating both general provisions applicable to all AI systems and specific regulations addressing high-risk and biometric systems. This dual approach shows Brazil’s attempt to balance broad regulatory oversight with targeted measures for high-risk categories. The diversity in regulatory strategies underscores the complexity and varied priorities across jurisdictions in managing the risks and opportunities presented by AI technologies.
India’s AI Governance Strategy and the Integration of the Non-Discrimination Principle
India seeks to advance its data and technological governance through the proposed Digital India Act, 2023, which is poised to supplant the IT Act of 2000. The Ministry of Electronics and Information Technology (MeitY) asserted that the bill would bring about a transformative change in the regulatory framework governing emerging technologies such as artificial intelligence and machine learning. However, the IT & Telecom Minister, Ashwini Vaishnaw, in a communication addressed to Parliament on April 5, 2023, took a diverging stance, conveying that the government did not intend to impose regulations on the flourishing growth of AI in India.[16]
Looking at the pace of AI development and emerging risks, the government then changed its stance from “not willing to regulate” to “we are working on regulation.”[17] But what changed in a gap of about one year? Over the past year, AI technology has advanced significantly, revealing new complexities and potential harms. This evolution has heightened the anticipation of AI becoming a transformative force across various sectors. Concurrently, major economic powers have proposed or enacted specific AI regulations, prompting India to reconsider its initial stance. The increasing sophistication of AI, the global trend toward targeted legislation, and AI’s growing geopolitical significance together prompted the need for a robust regulatory framework to address both opportunities and challenges.
There seems to be a growing recognition of the need to establish AI principles tailored to India’s context. The intention is to strategically align India’s AI policies with international standards, ensuring they support and facilitate, rather than hinder, business and technology transfer. By developing specific AI guidelines and principles, the government is hoping to harmonize its regulatory approach with global norms, thus fostering an environment that supports innovation while addressing potential risks.
In the context of non-discrimination and ethical governance of AI, three significant documents introduced in recent years have had a profound impact on the tech-policy landscape in India:
I. Responsible AI Approach Document (Part II)
NITI Aayog introduced the Responsible AI: Approach Document for India (Part II) in August 2021.[18] The approach document emphasized that identifying AI governance principles is the essential first step, which needs to be complemented by the mechanisms required for adherence to these principles in order to ensure a responsible AI ecosystem.[19] Further, it went on to enlist certain important principles of responsible AI in India (“RAI”): (i) safety and reliability, (ii) equality, (iii) inclusivity and non-discrimination, (iv) privacy and security, (v) transparency, (vi) accountability, and (vii) protection and reinforcement of positive human values. Of these, principles (ii) and (iii) are important for our discussion:
(a) The Principle of Equality provides that “the systems must treat individuals under the same circumstances relevant to the decision equally;” and
(b) The Principle of Inclusivity and Non-discrimination provides that “AI systems should not deny opportunity to a qualified person on the basis of their identity. It should not deepen the harmful historic and social divisions based on religion, race, caste, sex, descent, place of birth or residence in matters of education, employment, access to public spaces, etc. It should also strive to ensure that an unfair exclusion of services or benefits does not happen.”
According to NITI Aayog, these principles of RAI flow directly from the Constitution of India and all laws enacted thereunder, and are also compatible with the principles identified by international bodies such as the Global Partnership on Artificial Intelligence (GPAI).[20]
The document also acknowledged the impracticality of imposing uniform, prescriptive guidelines for ensuring adherence to ethical principles in AI, emphasizing instead the necessity of robust governance mechanisms.[21] Such mechanisms are pivotal in fostering the development of AI systems that are reliable, predictable, and trustworthy. It argues that responsible AI considerations must be seamlessly integrated into every stage of the AI lifecycle rather than treated as a one-time compliance exercise. This approach ensures that ethical standards are continuously upheld, accommodating the evolving nature of AI technologies and their diverse applications, thus safeguarding their alignment with foundational ethical and legal norms.
II. Proposed Digital India Act, 2023
The Digital India Act, 2023 (“DIA”), was proposed by MeitY on March 9, 2023.[22] The aims, objectives, and specifics of the act were made available to the public via presentation slides under the Digital India Dialogues.
The DIA, set to supplant the IT Act which has been operational for approximately 24 years, is designed with several objectives. Primarily, it aims to ensure that the Indian internet remains open, safe, and trustworthy, while fostering accountability. Additionally, the Act seeks to accelerate the growth of the innovation and technology ecosystem by managing the complexities arising from the internet’s rapid expansion and the proliferation of diverse intermediaries. It will establish a framework to expedite the digitalization of government processes, thereby strengthening democracy and governance through enhanced Government-to-Citizen (“G2C”) interactions. Furthermore, the Act is intended to safeguard citizens’ rights, address emerging technologies and associated risks, and ensure adaptability to future advancements, thereby maintaining its relevance in an evolving digital landscape.[23]
In reference to the development of emerging technologies, the DIA will recognise digital user rights (“DURs”), including the Right to be Forgotten, the Right to Secure Electronic Means, the Right to Redressal, the Right to Digital Inheritance, the Right Against Discrimination, and the Right Against Automated Decision-Making, thereby ensuring comprehensive protection and empowerment of individuals in the digital sphere.
As of now, specific rules detailing the regulation of these technologies have not been released or opened for public comment and discussion. However, it is assured that the DIA will encompass provisions to combat discrimination and to provide a remedy in cases of automated decision-making, as these rights are to be explicitly enshrined in the DURs. The inclusion of such provisions underscores the Act’s commitment to protecting individuals from discriminatory practices in the digital realm, guaranteeing that emerging technologies and AI are regulated in a manner that upholds fairness and equality.
III. TRAI’s Recommendations on the Governance of AI
On 20th July 2023, the Telecom Regulatory Authority of India (“TRAI”) released its recommendations on “Leveraging Artificial Intelligence and Big Data in Telecommunication Sector”.[24] This is particularly significant given that the recommendations come not from a policy think tank, non-governmental organization, or academic institution, but from a sectoral regulator. While the title might suggest that these recommendations are confined to a specific sector (i.e., the use of AI in the telecommunications sector), they in fact go beyond the telecom sector and also address national policy and the governance of AI.
The extensive 138-page document delves into various aspects of AI in India, including its proliferation, transformative potential, definition, emerging risks, and the facilitation of data availability. Broadly, the document underscores the urgent need for a regulatory framework to promote the responsible development of AI across sectors. It advocates for the establishment of an independent statutory body, proposed to be named the Artificial Intelligence and Data Authority of India (“AIDAI”). This authority would be tasked with overseeing the development and regulation of AI use cases in India, ensuring that the framework addresses sector-specific nuances while maintaining a unified approach to AI governance.
In the document, TRAI acknowledged bias as one of the most pressing challenges in the domain of AI, given its potential to adversely affect both individuals and society.[25] They argued that it is crucial to identify and mitigate bias in AI systems to ensure their fairness, transparency, and accountability. Achieving this requires a multidisciplinary approach that extends beyond technical solutions to include social, ethical, and legal considerations. This holistic strategy is necessary to address the complexities of bias and to develop AI systems that uphold equitable standards across various contexts.
The Way Forward
As argued above, the current horizontal legal frameworks may prove insufficient in addressing the discrimination perpetuated by AI systems, and a principles- and rules-based regulatory approach is therefore needed. Incorporating a non-discrimination principle into a special AI legislation, even without a precise definition, may be advantageous for India, given the ongoing ambiguity regarding the various manifestations of AI-induced discrimination. It is essential to impose obligations on AI developers to adopt a responsible AI-by-design approach, ensuring adherence to the principles of equality and non-discrimination. This regulatory evolution aims to foster a more equitable and accountable AI landscape.
India is still in the early stages of developing comprehensive legislation specifically focused on AI, with formal discussions having only just begun. To navigate this complex landscape, it is imperative that extensive studies be funded to examine the impact of AI across various sectors, with consultations involving major stakeholders.
Regarding the issue of self-regulation, I align with Sam Altman’s perspective. During a fireside chat at the Indraprastha Institute of Information Technology he stated, “Self-regulation is important and is something that we want to offer, but I don't think that the world should be left entirely in the hands of the companies either, given what we think is the power of this technology.”[26] Given the profound implications of AI technology, it is prudent to establish general regulations and guidelines governing the development, deployment, and use of AI. Additionally, sectoral regulators should be tasked with formulating sector-specific rules that align with global standards while addressing India’s unique requirements and needs.
In all these considerations, it is crucial not to overlook that innovation must be safeguarded and not impeded under any circumstances. The focus should be on fostering an environment where technological advancements are encouraged, but also accompanied by robust measures for risk management and ethical oversight. This dual focus will help in harnessing the transformative potential of AI while mitigating its adverse effects. Capacity building should be central to India’s Tech Policy, emphasizing the need to raise awareness about both the potential harms and the beneficial use cases of AI technology. The overarching goal should be to enhance the quality of life by leveraging AI advancements responsibly and effectively.
*Mihir Nigam is a 5th-year Intellectual Property Law (Hons.) student at National Law University, Jodhpur. He serves as the Team Lead for AI & Data Protection at the Centre for Research in Governance, Institutions, and Public Policy at the same university.
References:
[1] Brian Williamson, Aligning Regulation and AI, Communications Chambers, an independent report funded by Google (July 2024).
[2] Hariharan Y., Anti-Discrimination Laws in India: A Need for Reform, 5(1) Indian J.L. & Legal Rsch. 1 (2023).
[3] Protective discrimination refers to government policies that extend additional privileges to sections of society that are depressed and disadvantaged. What is called affirmative action in the United States embodies the same concept, even though the forms taken by such policies differ significantly from country to country. For example, Articles 15 and 16 provide for protective discrimination by aiming to protect women, children, scheduled castes, scheduled tribes, backward classes, and other economically weaker sections of society.
[4] Army School v. Shilpi Paul, 2005 (1) ESC 342.
[5] Dr. Anand Gupta v. Rajghat Education Centre and Ors., 2003 (1) AWC 503.
[6] Khaitan & Co, ‘Workplace Discrimination in the Private Sector’ (Khaitan & Co) <https://compass.khaitanco.com/workplace-discrimination-in-the-private-sector> accessed 9 August 2024.
[7] Rajendra P. Mamgain, Formal Labour Market in Urban India: Job Search, Hiring Practices and Discrimination, 2019, New Delhi: SAGE Publications, xxvii+313 p 265.
[8] Anti-Discrimination Laws in the Indian Private Sector: A Toothless Tiger? <https://compass.khaitanco.com/workplace-discrimination-in-the-private-sector> accessed 9 August 2024.
[9] Ibid.
[10] Supra, note 1.
[11] Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, adopted on 22 May 2019 and amended on 3 May 2024.
[12] AI Principles (OECD). https://www.oecd.org/en/topics/sub-issues/ai-principles.html
[13] As per the recommendation, “AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.”
[14] Digital Policy Alert, The Anatomy of AI Rules: A Systematic Comparative Analysis of AI Rules Across the Globe, in collaboration with the Law and Economics Foundation St. Gallen, https://digitalpolicyalert.org/ai-rules/the-anatomy-of-AI-rules.
[15] Ibid.
[16] No Regulations for Artificial Intelligence in India, Business Today (Apr. 6, 2023), https://www.businesstoday.in/technology/news/story/no-regulations-for-artificial-intelligence-in-india-it-minister-ashwini-vaishnaw-376298-2023-04-06.
[17] Government Working on Regulation for AI, Economic Times (Aug. 23, 2024), https://economictimes.indiatimes.com/tech/artificial-intelligence/government-working-on-regulation-for-ai-it-minister-ashwini-vaishnaw/articleshow/111454670.cms?from=mdr.
[18] NITI Aayog, Responsible AI: Approach Document for India (Part 2) - Operationalizing Principles for Responsible AI (Aug. 2021), https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf.
[19] Ibid, pg. 4.
[20] Ibid, pg. 4.
[21] Ibid, pg. 8.
[22] Presentation by MeitY on Proposed Digital India Act, 2023, Digital India Dialogues (Mar. 9, 2023), Bengaluru, Karnataka, https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf.
[23] Ibid, pg. 8.
[24] Telecom Regulatory Authority of India, Recommendations on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector (July 20, 2023), https://www.trai.gov.in/sites/default/files/Recommendation_20072023_0.pdf.
[25] Ibid, pg. 34.
[26] Self-Regulation Important but World Should Not Be Left in Hands of Companies, Times of India, May 30, 2024, https://timesofindia.indiatimes.com/business/india-business/self-regulation-important-but-world-should-not-be-left-in-hands-of-companies-sam-altman/articleshow/100856005.cms.



