
Search Results


  • CCPA’s Guidelines for Prevention and Regulation of Dark Patterns, 2023 & Advisory for Self-Audit

    By embracing Transparency by Design and actively rooting out the 13 specified dark patterns, your business not only ensures compliance with Indian law but also gains a critical advantage in the global digital economy: Trust. A Compliance Primer by Mihir Nigam

  • DPDPA 2025 Handbook: Data Privacy Compliance for Indian Businesses, Institutions, and Professionals

    The DPDPA 2025 Handbook offers over 100 pages and 15+ actionable frameworks to guide Indian businesses in navigating digital privacy compliance effectively. Introducing the DPDPA Compliance Handbook: India’s Digital Personal Data Protection Act (DPDPA), 2023, marks a turning point in how organizations collect, store, and use personal data. With new obligations and higher stakes, compliance can feel overwhelming—but it doesn’t have to. We’re excited to announce our DPDPA Compliance Handbook, designed to help businesses, institutions, and professionals navigate the law with clarity and confidence. Inside, you’ll find practical guidance, a unified compliance framework, and actionable tools to turn legal requirements into real-world practice. Whether you’re a startup or an established enterprise, this handbook is your essential companion for building trust, strengthening privacy practices, and unlocking competitive advantage. 🔗 Access it now: click “DPDPA Handbook” in the menu bar, or Click Here.

  • Data Borders in a Digital World: Comparing Indian and European Approaches to Cross-Border Personal Data Transfers

    “Data is now regarded as a new oil. In an interconnected global economy, it crosses borders with the same frequency as international trade. The DPDPA’s approach to cross-border data transfers represents India’s strategic positioning in the global data economy, balancing openness to international business with protection of citizen privacy and national interests.” Abstract The exponential growth of digital economies has necessitated the seamless flow of data across national boundaries, creating unprecedented challenges for regulatory frameworks worldwide. This paper examines the evolving landscape of cross-border data transfer regulations, with particular emphasis on India’s journey from the B.N. Srikrishna Committee recommendations to the Digital Personal Data Protection Act, 2023, and its comparison with the European Union’s General Data Protection Regulation, 2018. Through a comprehensive analysis of regulatory approaches, this paper explores the tension between data sovereignty, economic imperatives, and privacy protection in an increasingly interconnected digital ecosystem. 1. Introduction: Understanding Cross-Border Data Transfer Cross-border data transfer refers to the transmission of personal data from one jurisdiction to another, whether through active transfer, remote access, or storage in foreign territories [1] . This seemingly technical concept embodies profound questions about sovereignty, privacy, economic competitiveness, and technological governance in an interconnected world. In the digital age, this concept has evolved beyond mere geographical movement to include complex scenarios involving cloud computing, distributed processing, and multi-jurisdictional data access. [2] The phenomenon encompasses various modalities of data movement. Active transfers involve deliberate transmission of datasets across borders, such as when multinational corporations consolidate global employee records at headquarters. 
Passive transfers occur through cloud storage arrangements where data automatically replicates across geographically distributed servers. Remote access scenarios enable foreign entities to view or process data without physical transfer, raising complex questions about the locus of processing. The economic significance of cross-border data flows cannot be overstated. McKinsey Global Institute estimates that global data flows contribute more to GDP growth than traditional goods trade, with cross-border bandwidth usage growing 45-fold between 2005 and 2020. [3]  For India specifically, the information technology and business process management sector, heavily dependent on cross-border data flows, contributes approximately 7.5% to GDP and employs over 4.5 million people. [4]   2. The Indian Regulatory Journey: Evolution of Cross-Border Data Transfer Regulation. A. Pre-Legislative Landscape Before comprehensive data protection legislation, India’s approach to cross-border data transfer was fragmented across sectoral regulations and contractual frameworks. The Information Technology Act, 2000, provided minimal guidance, with Section 43A and the associated Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, establishing basic requirements. [5] Rule 7 of the 2011 Rules permitted transfer of sensitive personal data to entities ensuring the “same level of data protection” as mandated under Indian law, introducing a rudimentary adequacy concept. [6]  However, this framework proved inadequate for addressing sophisticated data governance challenges, lacking enforcement mechanisms and comprehensive coverage. Sectoral regulations filled some gaps. The Reserve Bank of India’s data localization mandate for payment system operators, introduced in 2018, required storage of all payment system data exclusively in India. 
[7]  The Insurance Regulatory and Development Authority mandated that insurers maintain policyholder data within India while permitting processing abroad under specific conditions. [8] B. The B.N. Srikrishna Committee: Foundational Principles The Committee of Experts under Justice B.N. Srikrishna, constituted in 2017, undertook comprehensive examination of data protection needs, including cross-border transfer mechanisms. The Committee’s white paper identified key considerations: ensuring continued protection for Indian citizens’ data abroad, maintaining law enforcement access, and preserving India’s competitive advantage in global services trade. [9]  The Committee analyzed various international models before proposing a nuanced approach. Rather than absolute data localization, it recommended conditional cross-border transfers based on adequacy determinations, standard contractual clauses, and intra-group schemes. [10] Critical personal data, however, would require mirroring in India, ensuring governmental access when necessary. [11] The Committee’s draft Personal Data Protection Bill, 2018, embodied these principles through Chapter VII, establishing a comprehensive framework for cross-border transfers. [12]  Section 40 proposed explicit consent as the baseline requirement, supplemented by adequacy decisions, approved contractual arrangements, and necessity-based transfers. [13] Significantly, the Committee rejected arguments for unrestricted data flows, noting that “the primary value from data originates from its collection in India... it is therefore vital that Indians are able to access and harness this value”. [14]  This perspective influenced subsequent legislative developments, embedding data sovereignty concerns within the regulatory framework. C. Legislative Evolution: Multiple Iterations The Personal Data Protection Bill, 2019, introduced in Parliament, modified the Srikrishna Committee’s recommendations substantially. 
While retaining the basic architecture of conditional transfers, it expanded data localization requirements. Section 33 mandated that sensitive personal data could be transferred abroad only after explicit consent, while critical personal data must be processed exclusively in India. [15] The Bill’s provisions sparked intense debate. Industry stakeholders argued that stringent localization requirements would undermine India’s position as a global technology services hub. Civil society organizations expressed concerns about governmental access to localized data without adequate safeguards. [16]  International trade partners, particularly the United States and European Union, raised concerns about potential trade barrier implications. [17] The Joint Parliamentary Committee’s examination, spanning nearly two years, resulted in significant modifications. The Committee’s report, submitted in December 2021, recommended expanding exemptions for government agencies while maintaining strict cross-border transfer restrictions. These recommendations proved controversial, with dissent notes from multiple committee members highlighting concerns about surveillance and economic impact. D. The Digital Personal Data Protection Act, 2023 The Digital Personal Data Protection Act (DPDPA), 2023, represents a shift from earlier proposals. Section 16 adopts a streamlined approach to cross-border transfers, departing from the complex architecture of previous iterations. Section 16(1) creates a crucial exception: “The Central Government may, by notification, restrict the transfer of personal data by a Data Fiduciary for processing to such country or territory outside India as may be so notified”. [18]  This blacklist provision enables targeted restrictions based on national security, diplomatic relations, or strategic considerations. 
Rather than imposing blanket localization requirements, it establishes a flexible framework allowing calibrated responses to evolving geopolitical and economic circumstances. [19] E. Sectoral Regulations Under Section 16(2) Section 16(2)’s reference to sectoral restrictions acknowledges existing data localization requirements across various domains. These sector-specific regulations continue operating alongside the DPDPA’s general framework, creating a layered governance structure. Financial Services Sector : The Reserve Bank of India’s April 2018 directive mandates that all payment system providers store entire data relating to payment systems only in India. [20]  This encompasses end-to-end transaction details, information collected, carried, or processed as part of the payment message or instruction. The directive permits processing abroad if data is stored simultaneously in India and deleted from foreign systems within prescribed timeframes. The rationale emphasizes unfettered supervisory access, improved monitoring capabilities, and enhanced consumer protection. [21]  Implementation has required significant infrastructure investment by global payment companies, with some estimating compliance costs exceeding $100 million for major providers. [22] Insurance Sector : The Insurance Regulatory and Development Authority of India (IRDAI) maintains nuanced requirements through its Outsourcing of Activities by Indian Insurers Regulations, 2017. [23]  While core policyholder data must remain in India, insurers may process data abroad under stringent conditions including regulatory approval, audit rights, and immediate repatriation capabilities. Telecom Sector : The Unified License Agreement mandates that telecommunication service providers maintain subscriber data, call detail records, and network configuration data within India. 
[24]  The National Security Directive on Telecommunication Sector further prohibits transfer of certain categories of data outside India without explicit government approval. [25] Healthcare Sector : The Digital Information Security in Healthcare Act (DISHA), though not yet enacted, proposes comprehensive localization requirements for electronic health records. [26] Current guidelines under the Clinical Establishments Act require maintenance of patient records within India, effectively creating de facto localization. [27] Government and Public Sector : The MeitY’s empanelment guidelines for cloud service providers serving government entities mandate data localization for government departments and public sector undertakings. [28]  This requirement extends to both storage and processing, with limited exceptions for non-sensitive data. 3. The European Approach to Cross-Border Data Transfers A. Historical Evolution The European approach to cross-border data transfer emerged from decades of regulatory evolution, beginning with the Council of Europe’s Convention 108 in 1981. The Data Protection Directive 95/46/EC established the foundational principle that personal data could only be transferred to third countries ensuring “adequate” protection. [29] The adequacy standard proved contentious from inception. The Article 29 Working Party’s interpretation required third countries to provide protection “essentially equivalent” to European standards, a threshold few non-European jurisdictions could meet. [30] By 2018, only twelve jurisdictions had received adequacy determinations, creating significant friction for global data flows. [31] The Safe Harbour framework with the United States, established in 2000, attempted to bridge divergent approaches through self-certification mechanisms. However, the European Court of Justice’s landmark Schrems I decision invalidated Safe Harbour, finding that mass surveillance programs undermined adequate protection guarantees. [32] B. 
GDPR’s Multi-Layered Transfer Mechanisms The General Data Protection Regulation, effective from May 2018, refined and expanded transfer mechanisms through Chapter V. [33] Article 45 maintains adequacy decisions as the gold standard, requiring the European Commission to assess third country laws, international commitments, and enforcement mechanisms. The adequacy assessment examines multiple factors: rule of law, respect for human rights, relevant legislation, supervisory authorities’ independence and effectiveness, and international commitments. The European Commission must review adequacy decisions at least every four years, ensuring continued protection. [34] Article 46 provides alternative transfer mechanisms through appropriate safeguards. Standard Contractual Clauses (SCCs), adopted by the Commission, create binding obligations between data exporters and importers. [35] The 2021 SCCs incorporate Schrems II requirements, mandating transfer impact assessments and supplementary measures where destination country laws may undermine protection. [36] Binding Corporate Rules (BCRs) enable multinational organizations to transfer data within corporate groups following supervisory authority approval. BCRs must demonstrate comprehensive data protection policies, enforceability, and adequate resources for compliance. [37] The approval process, though rigorous, provides legal certainty for complex corporate structures. Article 47 establishes detailed BCR requirements including binding nature, complaint handling procedures, cooperation with supervisory authorities, and liability mechanisms. Organizations must demonstrate that BCRs are legally binding and enforceable against all group members, including employees. C. Derogations and Exceptional Circumstances Article 49 delineates specific derogations for exceptional transfers absent adequacy decisions or appropriate safeguards. 
Explicit consent requires clear information about transfer risks and cannot be relied upon for repeated, mass, or structural transfers. [38] Contractual necessity permits transfers essential for contract performance between the data subject and controller, or contracts concluded in the data subject’s interest. [39] This derogation applies narrowly to genuine transfers necessary for contracts, not merely useful or convenient ones. Another important derogation rests on public interest: important reasons of public interest, recognized in Union or Member State law, may justify transfers. [40] The European Data Protection Board emphasizes that public interest must be important enough to override individual data protection rights in specific circumstances. D. The Schrems II Impact The Court of Justice’s Schrems II decision fundamentally altered the cross-border transfer landscape. [41] Invalidating the EU-U.S. Privacy Shield, the Court held that U.S. surveillance laws, particularly FISA Section 702 and Executive Order 12333, prevented adequate protection. [42] The Court mandated a case-by-case assessment of destination country laws, requiring data exporters to verify that transferred data receives protection essentially equivalent to European standards. Where destination country laws may impinge upon protection, organizations must implement supplementary measures or suspend transfers. [43] Post-Schrems II guidance from the European Data Protection Board identifies technical measures (encryption, pseudonymization), contractual measures (transparency obligations, audit rights), and organizational measures (internal policies, training) as potential supplements. [44] However, these measures must effectively prevent government access that exceeds what is necessary and proportionate in a democratic society. 4. Comparative Analysis: Divergent Approaches to Common Challenges A. Conceptual Frameworks The Indian and European approaches reflect fundamentally different conceptual starting points. 
Europe’s rights-based framework treats data protection as a fundamental right, necessitating stringent conditions for international transfers. India’s approach balances multiple objectives: privacy protection, economic development, digital sovereignty, and strategic autonomy. The GDPR’s “essentially equivalent” standard demands that third countries approximate European protection levels. India’s DPDPA adopts a more flexible standard, considering factors beyond pure data protection adequacy. This difference reflects varying constitutional traditions and economic priorities. GDPR’s Rights-Based Approach : The GDPR treats data protection as a fundamental right, requiring positive demonstration of adequate protection before transfers. This represents the typical paradigm of international data transfer provisions. [45] DPDP Act’s Sovereignty-Based Approach : India’s framework prioritizes governmental discretion and economic flexibility. The DPDP Act reverses the typical paradigm by presuming that transfers may occur without restrictions, unless the Government specifically restricts transfers to certain countries. [46] B. Enforcement and Remedies GDPR enforcement through independent supervisory authorities, backed by significant penalties (up to 4% of global annual turnover), creates strong compliance incentives [47] . The one-stop-shop mechanism streamlines enforcement for cross-border violations while maintaining local accessibility. India’s Data Protection Board, established under DPDPA, combines adjudicatory and advisory functions. Penalties, though substantial (up to ₹250 crores), are not proportioned to company turnover, potentially reducing deterrence for large multinationals. [48] European data subjects enjoy comprehensive rights, including access, rectification, erasure, and data portability, enforceable against foreign data importers through contractual mechanisms. 
DPDPA provides similar rights, but its extraterritorial enforcement remains uncertain because of the absence of detailed implementing regulations. The Act also does not include a right to data portability or a right against automated decision-making. C. Geopolitical Considerations Europe’s approach reflects its position as a regulatory superpower, leveraging market access to globalize its data protection standards. [49] The “Brussels Effect” has prompted worldwide adoption of GDPR-like provisions, establishing Europe as the de facto global standard-setter. India’s approach, by contrast, balances multiple imperatives: maintaining competitiveness in global services trade, asserting digital sovereignty, and managing relationships with major economic partners. The flexibility built into Section 16 enables diplomatic negotiations while preserving policy space. [50] Both jurisdictions face pressure from the United States to maintain open data flows, though through different mechanisms. Europe negotiates as a bloc with significant market power; India navigates bilateral pressures while building strategic partnerships. [51] 5. Implementation Challenges and Emerging Issues A. Technical Implementation Organizations face significant technical challenges in implementing cross-border transfer requirements. Data mapping exercises must identify all international data flows, including subtle transfers through cloud services, analytics platforms, and support functions. [52] The European requirement for transfer impact assessments demands a sophisticated understanding of foreign surveillance laws, often requiring expensive legal opinions. [53] Organizations report spending millions on compliance infrastructure, with ongoing monitoring and documentation requirements. India’s notification-based system, while simpler, creates uncertainty during transition periods. 
Organizations must prepare for potential sudden restrictions on transfers to specific countries, requiring contingency planning and alternative processing arrangements. [54] B. Economic Implications Cross-border data transfer restrictions impose measurable economic costs. The European Centre for International Political Economy estimates that data localization requirements could reduce GDP by up to 1.7% in implementing countries. [55] For India, restrictions could particularly impact the $194 billion IT services industry. [56] Compliance costs disproportionately burden small and medium enterprises, which lack the resources for complex legal and technical measures. This may accelerate market concentration as smaller players exit or consolidate with larger entities better equipped for compliance. However, data localization may generate domestic benefits including infrastructure investment, local employment, and reduced foreign exchange outflows for cloud services. India’s push for local data centres has attracted over $5 billion in investments from major technology companies. [57] C. Emerging Technologies Artificial intelligence and machine learning systems depend on vast, diverse datasets often requiring cross-border aggregation. Transfer restrictions may fragment datasets, reducing AI system effectiveness and potentially introducing biases. [58] Blockchain and distributed ledger technologies inherently involve cross-border data distribution, challenging traditional transfer frameworks. [59] Regulatory approaches must evolve to address decentralized architectures, in which data location becomes fluid. Quantum computing’s potential to break current encryption standards threatens technical safeguards underpinning cross-border transfers. [60] Regulators and organizations must prepare for post-quantum cryptography transitions affecting international data flows. D. 
International Cooperation The absence of global data governance frameworks creates regulatory fragmentation, increasing compliance complexity and costs. Various initiatives attempt harmonization, including the OECD Privacy Framework, APEC Cross-Border Privacy Rules, and Global Cross-Border Privacy Rules system. Trade agreements increasingly incorporate data flow provisions, though often conflicting with data protection regulations. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership prohibits data localization, potentially conflicting with domestic data protection laws. Mutual Legal Assistance Treaties and international agreements like the CLOUD Act create mechanisms for governmental data access across borders, raising sovereignty and privacy concerns. [61] Balancing law enforcement needs with privacy protection remains a persistent challenge. Conclusion The regulation of cross-border data transfers represents one of the defining challenges of the digital age. India’s journey from the comprehensive recommendations of the B.N. Srikrishna Committee to the enacted DPDP Act reflects a recalibration between data protection, economic imperatives, and sovereignty concerns. While the DPDP Act’s blacklist approach offers flexibility for businesses, the preservation of sectoral restrictions and the absence of detailed transfer mechanisms create continuing uncertainties. The contrast with the GDPR’s rights-based framework highlights the absence of a global consensus on data governance. As data flows become increasingly critical to economic competitiveness and innovation, the challenge lies in developing frameworks that protect individual privacy while enabling the benefits of the digital economy. India’s evolving approach, balancing liberalization with strategic control, may offer a model for other developing economies navigating similar tensions. 
The future of cross-border data transfer regulation will likely be shaped by technological advances, geopolitical realignments, and evolving conceptions of digital rights. Success will require not only robust domestic frameworks but also international cooperation to ensure that data protection does not become a barrier to legitimate economic and social benefits. As India implements the DPDP Act and potentially notifies restricted countries, its approach will be closely watched as a test case for alternative models of data governance in the Global South. Neither absolute data sovereignty nor unrestricted flows serve society's best interests. The path forward demands continued dialogue between stakeholders, adaptive regulatory frameworks that can respond to technological change, and a commitment to protecting fundamental rights while enabling innovation. Only through such balanced approaches can nations harness the benefits of the global digital economy while maintaining the trust and protection their citizens deserve. References: [1] Christopher Kuner, Transborder Data Flows and Data Privacy Law 13–15 (Oxford University Press 2013). [2] Lokke Moerel, Back to Basics: When Does EU Data Protection Law Apply?, 2 Int'l Data Priv. L. 92, 95–98 (2011). [3] McKinsey Glob. Inst., Digital Globalization: The New Era of Global Flows 2–4 (2016). [4] Nat'l Ass'n of Software & Servs. Cos., Strategic Review 2020: Navigating the Next 14–16 (2020). [5] Information Technology Act, No. 21 of 2000, India Code, § 43A; Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, Ministry of Communications & Information Technology (Apr. 11, 2011). [6] Id. at Rule 7. [7] Reserve Bank of India, Storage of Payment System Data, RBI/2017-18/153 (Apr. 6, 2018). [8] Insurance Regulatory & Development Authority of India, Guidelines on Outsourcing of Activities by Insurance Companies, Circular No. 
IRDAI/Life/GDL/MISC/080/04/2017 (Apr. 5, 2017). [9] Committee of Experts Under the Chairmanship of Justice B.N. Srikrishna, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians 70–75 (2018). [10] Id. at 88–92. [11] Id. at 92–94. [12] Personal Data Protection Bill, 2018 (Draft Bill), Ch. VII, §§ 40–42. [13] Id. § 40. [14] Srikrishna Committee Report, supra note 9, at 86. [15] Personal Data Protection Bill, 2019, § 33. [16] See Access Now, Article 19 & Internet Freedom Found., Joint Statement on the Personal Data Protection Bill, 2019 (Dec. 12, 2019). [17] Letter from Wilbur Ross, U.S. Secretary of Commerce, to Piyush Goyal, Indian Minister of Commerce & Indus. (Nov. 14, 2019). [18] Digital Personal Data Protection Act, No. 22 of 2023, § 16(1) [hereinafter DPDPA]. [19] See Arindrajit Basu & Elonnai Hickok, The Localisation Gambit: Unpacking India’s Approach to Data Sovereignty, 2 Digital Policy, Regulation & Governance 234, 240–45 (2020). [20] RBI Circular, supra note 7. [21] See Reserve Bank of India, Report of the Working Group on Digital Lending Including Lending Through Online Platforms and Mobile Apps 78–80 (2021). [22] Payment Council of India, Estimated Compliance Costs for Payment Data Localization 23–25 (2019). [23] Insurance Regulatory and Development Authority of India (Outsourcing of Activities by Indian Insurers) Regulations, 2017, F. No. IRDAI/Reg/7/143/2017. [24] Department of Telecommunications, Unified License Agreement ch. VIII, § 39.23 (amended 2021). [25] National Security Directive on Telecommunication Sector, Cabinet Secretariat Order No. 13(11)/2021-T (Dec. 15, 2021). [26] Digital Information Security in Healthcare Act, 2018, Draft Bill § 29 (Ministry of Health & Family Welfare). [27] Clinical Establishments (Central Government) Rules, 2012, G.S.R. 361(E), r. 9. [28] Ministry of Electronics & Info. Tech., Guidelines for Government Departments on Contractual Terms Related to Cloud Services at 4.2 (2017). 
[29] Council Directive 95/46/EC, On the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 25, 1995 O.J. (L 281) 31. [30] Article 29 Data Prot. Working Party, Working Document on Transfers of Personal Data to Third Countries: Applying Articles 25 and 26 of the EU Data Protection Directive, WP 12, at 5–7 (July 24, 1998). [31] European Commission, Adequacy Decisions, https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en . [32] Case C-362/14, Schrems v. Data Protection Commissioner, ECLI:EU:C:2015:650, at 73–98. [33] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), Ch. V, 2016 O.J. (L 119) 1 [hereinafter GDPR]. [34] Id. art. 45(3). [35] Id. art. 46(2)(c). [36] Commission Implementing Decision (EU) 2021/914 of 4 June 2021 on Standard Contractual Clauses for the Transfer of Personal Data to Third Countries Pursuant to Regulation (EU) 2016/679, 2021 O.J. (L 199) 31. [37] GDPR, supra note 33, art. 47. [38] European Data Protection Board, Guidelines 2/2018 on Derogations of Article 49 under Regulation 2016/679, at 8–10 (May 25, 2018). [39] GDPR, supra note 33, art. 49(1)(b)-(c). [40] GDPR, supra note 33, art. 49(1)(d). [41] Case C-311/18, Data Protection Commissioner v. Facebook Ir. Ltd. & Maximillian Schrems, ECLI:EU:C:2020:559 (July 16, 2020) [hereinafter Schrems II]. [42] Id. at 165–185. [43] Id. at 134–135. [44] European Data Protection Board, Recommendations 01/2020 on Measures that Supplement Transfer Tools to Ensure Compliance with the EU Level of Protection of Personal Data, Version 2.0, at 17–29 (June 18, 2021). [45] Kenneth A. Bamberger & Deirdre K. 
Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe 189–215 (MIT Press 2015). [46] Rahul Sharma, The DPDP Act's Blacklist Approach: Innovation or Abdication?, 12 J. Tech. L. & Pol'y 234, 240 (2024). [47] GDPR, supra note 33, art. 83(5). [48] DPDPA, 2023, supra note 18, § 33. [49] Anu Bradford, The Brussels Effect, 107 Nw. U. L. Rev. 1, 4–12 (2012). [50] See Neha Mishra, Cross-Border Data Flows and India's Data Protection Bill: Regulatory Adequacy, Sovereignty and Trade, 9 Indian J. Const. L. 125, 140–48 (2019). [51] See India-EU Strategic Partnership: A Roadmap to 2025, Joint Statement (July 15, 2020). [52] See IAPP-EY Annual Governance Report 2021: Privacy Engineering and the Rise of the Data Protection Officer, 34–38 (2021). [53] European Data Prot. Bd., Frequently Asked Questions on the Judgment of the Court of Justice of the European Union in Case C-311/18 7–9 (July 23, 2020). [54] See Rahul Matthan, India's Data Protection Law: A Work in Progress, Takshashila Inst. Pol'y Brief 15–18 (Aug. 2023). [55] Eur. Ctr. for Int'l Pol. Econ., The Economic Impact of Data Localization 3–5 (2019). [56] Nat'l Ass'n of Software & Servs. Cos., Annual Report 2022-23, at 28–31 (2023). [57] Invest India, Data Centre Industry in India: Market Landscape and Investment Opportunities 14–17 (2023). [58] Andrew McAfee & Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future 156–72 (2017). [59] See Michèle Finck, Blockchain Regulation and Governance in Europe 210–25 (2019). [60] See Nat'l Inst. of Standards & Tech., Post-Quantum Cryptography Standardization: Report on the Third Round 5–8 (2022). [61] Clarifying Lawful Overseas Use of Data Act (CLOUD Act), Pub. L. No. 115-141, 132 Stat. 348 (2018).

  • Principle of Non-Discrimination in AI Governance | Part I

Part I | Understanding how artificial intelligence (AI) systems can inadvertently or systematically perpetuate discrimination. This part of the article explores the need for OECD AI Principle 1.2, focusing on non-discrimination throughout the AI lifecycle. By examining five ways through which bias can infiltrate AI systems, ranging from discriminatory class labels to problematic proxies, the discussion highlights the critical need for robust anti-bias measures in AI governance. Understanding these mechanisms is vital to ensuring equitable and fair AI development amidst a rapidly evolving technological frontier. Mihir Nigam* INTRODUCTION There is an apparent multi-dimensional divergence in how countries across the globe are approaching the governance of artificial intelligence. The major risk in this situation is that, in the absence of a global policy on the matter, countries will create a fragmented regulatory landscape, reminiscent of the current patchwork of data transfer regulations. To counter this risk at the earliest, in a major development, the European Union legislated the EU Artificial Intelligence Act ("the Act"). Following the approval of the Council of the European Union on 21 May 2024, the Act entered into force across all EU member states on 1 August 2024. The Act is guided by a risk-based approach, and it seeks to regulate the most controversial technology of our times, its uses, and its developers. The AI Act, among its many objectives, was brought in to address the urgency for international alignment on AI governance. That urgency stems from concerns raised by whistleblowers within the industry who are wary of the rapid pace of AI development and the major societal implications that could arise if AI systems are developed and deployed carelessly in a fragmented regulatory landscape. 
[1] In this context, the AI Principles prepared by the OECD have been carefully crafted to minimize risks and provide a foundation for countries to align and formulate their policies. Among the many principles provided by the OECD, the topic explored in this article is OECD AI Principle 1.2 ("Principle 1.2"), the principle that emphasizes non-discrimination throughout the AI system's life cycle. [2] To appreciate this principle, it is first important to understand how easily discriminatory behaviour can creep into an AI system. Barocas and Selbst, in their article, have highlighted five ways in which AI decision-making can lead, unintentionally, to discrimination. [3] In the following paragraphs, I have tried to establish the need for a non-discrimination principle in AI governance by discussing the five issues highlighted by Barocas and Selbst, using lucid language and relevant examples. (1) Discrimination through "target variables" and "class labels" During the training of an AI system, the outcome the model is meant to predict is called the "target variable," and when the possible values of the target variable are divided into mutually exclusive categories, those categories are called "class labels." [4] These concepts matter for our purposes because defining the target variable sometimes requires the creation of class labels. For example, consider a situation where a company requires an AI system for sorting job applications. In order to select a good employee (the target variable), we need to create class labels by defining what a good employee really is: one who is never late to work (class label #1), or one who performs better by making more sales (class label #2). 
[5] Now consider this proposition: poor people, who often live outside the city, have to travel 40-50 km to reach the office, and face traffic jams or public transport issues, are usually late to work more often than other employees. Suppose, further, that it could be established that immigrants are, on average, poorer and live outside the city. In that case, the choice of going ahead with class label #1 will lead to discrimination against immigrants from disadvantaged backgrounds. Another example is a scenario where an AI system is designed to classify loan applications for a financial institution. Here, the target variable could be "the likelihood of a loan application being approved." The class labels might be built from categories such as "Stable Employment," "Bank Transaction History," and "Place of Residence/Region." Now, if the first class label is based on the criterion of stable employment, it might disadvantage applicants from rural areas or people with seasonal employment. In India, many small farmers and seasonal workers live in less urbanized regions and may have less stable employment due to the seasonal nature of work or fluctuating local markets. Furthermore, payment in such employment is usually made in cash, so their transactions will not be reflected in their bank transaction history. If these applicants are evaluated on such criteria, the AI might unfairly place them in the "low chance of approval" category even though they may have a solid financial standing. (2) Prejudiced Labelling in Training the Model Data labelling is the process of manually assigning class labels to data, either based on pre-existing categorizations or through subjective judgments when no labels are available. This process can introduce biases if the labels are based on flawed or prejudiced criteria. 
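The effect of choosing class label #1 over class label #2 can be sketched in a few lines of Python. This is a hypothetical toy example (the group names, commute distances, and sales scores are all invented for illustration), not an analysis of any real hiring system:

```python
# Toy sketch: the chosen class label, not explicit group membership,
# drives the disparity. All groups and numbers are invented.

def selection_rate(applicants, passes_label):
    """Fraction of a group that satisfies the chosen class label."""
    return sum(1 for a in applicants if passes_label(a)) / len(applicants)

# Hypothetical pool: long commutes are concentrated in one group.
pool = [
    {"group": "city",    "commute_km": 5,  "sales": 70},
    {"group": "city",    "commute_km": 8,  "sales": 55},
    {"group": "city",    "commute_km": 10, "sales": 60},
    {"group": "city",    "commute_km": 6,  "sales": 80},
    {"group": "migrant", "commute_km": 45, "sales": 75},
    {"group": "migrant", "commute_km": 50, "sales": 85},
    {"group": "migrant", "commute_km": 40, "sales": 65},
    {"group": "migrant", "commute_km": 48, "sales": 90},
]
city    = [a for a in pool if a["group"] == "city"]
migrant = [a for a in pool if a["group"] == "migrant"]

# Class label #1 ("never late"), proxied here by a short commute:
never_late = lambda a: a["commute_km"] <= 20
# Class label #2 ("better sales performance"):
good_sales = lambda a: a["sales"] >= 70

print(selection_rate(city, never_late), selection_rate(migrant, never_late))  # 1.0 0.0
print(selection_rate(city, good_sales), selection_rate(migrant, good_sales))  # 0.5 0.75
```

Under label #1 the migrant group is excluded entirely, even though under label #2 (actual performance) it does better than the city group; the disparity comes from how the target variable was operationalized, without any explicit reference to group membership.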
[6] For example, if historical decisions or labels used to train an algorithm reflect past prejudices, the algorithm may perpetuate those biases, and there always remains a possibility that an AI system is trained on a biased data set. AI models are usually trained on huge piles of data, and if during training the data has not been labelled correctly and no measures are taken to rule out bias from the data set, it is an accepted fact that the AI system will simply reproduce that bias. The discriminatory effects of this can be observed in two ways: i) when the AI system is trained on data that is inherently biased; and ii) when the AI system learns from a biased sample. [7] For instance, reports surfaced that women-founded start-ups receive significantly less venture capital than those founded by men; in 2019, just 2% of venture capital was directed towards start-ups founded by women. [8] If this data is fed to an AI system designed to help investors make important investment decisions, the system may well suggest that an investor invest in start-ups founded by men, because a seemingly logical conclusion (that this is the best course of action) can be drawn from the fact that the majority of venture capitalists are directing their capital towards start-ups led by men. Such algorithmic bias is not a new problem; it has been an issue since the early development of AI systems. In the late 1970s, Dr. Geoffrey Franglen's admissions algorithm at St. George's Hospital Medical School came under scrutiny. The algorithm aimed to streamline and standardize the admissions process; however, despite its goal of reducing human bias, it inadvertently reinforced existing prejudices. By using biased historical data, the algorithm unfairly treated applicants with non-Caucasian names and female applicants. 
[9] This early case demonstrated that algorithms, while designed to emulate human decision-making, can perpetuate and institutionalize biases rather than eliminate them. " AI models are trained on huge piles of data, and if during training the data has not been labelled correctly and no measures are taken to rule out bias from the data set, the AI system will simply reproduce that bias. (3) Discrimination in Data Collection Data collection is the phase of a project in which data is ingested from multiple sources. Challenges stem from data that is incomplete, inaccurate, or unrepresentative of certain groups. For example, if the records for underrepresented groups are less complete or less accurate because of structural biases, this skew might under- or over-represent those groups in the dataset. [10] The picture can be further skewed by socioeconomic factors that alter participation, technology access, or geographic representation. Whenever decisions are made on such flawed data, the failure to represent groups in an unbiased and proportionate form can systematically bias the results against protected classes. Bias in data collection methods can have significant societal consequences: when critical governmental decisions are informed by AI systems that are biased due to flaws in their data collection processes, those biases can be perpetuated and exacerbated. For example, the Street Bump application, which utilized GPS technology to report road conditions to municipal authorities, relied on volunteer users to gather data. [11] The application's website stated: "Volunteers use the Street Bump mobile app to collect road condition data while they drive. 
This data provides governments with real-time information to address issues and plan long-term infrastructure investments." However, if smartphone ownership is lower among economically disadvantaged populations than among wealthier individuals, there is a risk that road conditions in poorer areas will be underreported. [12] Consequently, this underrepresentation could result in fewer repairs and less attention to infrastructure problems in these underserved communities. " Bias in data collection methods can have significant societal consequences: when critical governmental decisions are informed by AI systems that are biased due to flaws in their data collection processes, those biases can be perpetuated and exacerbated. (4) Discrimination by way of features in an AI system Incorporating specific features into an AI system's decision-making process can inadvertently introduce bias against certain groups. Features are used, in essence, to simplify the algorithm's operation: because assessing each input completely is impossible due to system constraints or the cost of the operations, certain features, attributes, or characteristics are chosen for the purpose of prediction. For more clarity, consider the survey which reported that many employers in India tend to favour candidates who have graduated from prestigious and expensive universities in London. [13] Against this backdrop, it is important to note that individuals from certain racial or socioeconomic backgrounds may be underrepresented in these elite institutions due to various systemic barriers. Now, consider a scenario where an AI system is designed to screen job applications and the feature selected for sorting applications is the "educational background" of applicants. 
If the AI is programmed to prioritize candidates from high-status foreign universities, it may disproportionately disadvantage applicants from marginalized racial groups, who are less likely to have attended such institutions. This selection criterion could lead to a biased hiring process in which qualified individuals are overlooked simply because of their educational background rather than their actual capabilities or experience. Another example is the incorporation of certain features into health insurance underwriting algorithms, which can unintentionally introduce bias against specific groups. For instance, insurance companies prioritize features such as body mass index ("BMI") and history of pre-existing conditions in their risk assessments. [14] However, individuals from lower socioeconomic backgrounds or marginalized communities may face unique health challenges and barriers that affect these factors differently. Imagine a scenario where an AI system is designed to evaluate health insurance applications and includes features such as "BMI" and "frequency of doctor visits." If the AI system is programmed to weigh these features heavily, it may disproportionately disadvantage individuals from communities with limited access to healthcare, or those who experience higher rates of chronic conditions due to systemic inequities. For example, people from lower-income neighbourhoods might have low BMIs and less frequent medical check-ups due to financial constraints and lack of healthcare access. This criterion could lead to a biased underwriting process, in which qualified individuals are unfairly denied coverage or charged higher premiums based on factors beyond their control rather than their actual health risks or needs. 
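A minimal sketch of this feature-weighting concern, assuming an invented linear scoring rule and made-up feature names (`bmi_flag`, `low_visit_flag`); real underwriting models are far more complex, but the mechanism is the same:

```python
# Hypothetical sketch: a heavy weight on an access-correlated feature
# penalizes applicants for their circumstances, not their health.
# Feature names and weights are invented for illustration.

def risk_score(applicant, weights):
    """Simple linear risk score: higher means 'riskier' under the chosen features."""
    return sum(weights[feature] * applicant[feature] for feature in weights)

weights = {"bmi_flag": 2.0, "low_visit_flag": 3.0}  # visit frequency weighted heavily

# Two applicants with the same underlying health, different access to care:
regular_care = {"bmi_flag": 0, "low_visit_flag": 0}  # frequent check-ups
low_access   = {"bmi_flag": 0, "low_visit_flag": 1}  # few visits due to cost barriers

print(risk_score(regular_care, weights))  # 0.0
print(risk_score(low_access, weights))    # 3.0 -- penalized for access, not health
```

The two applicants differ only in a feature driven by access to care, yet the chosen weighting translates that difference directly into a higher premium or a denial.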
(5) Problem with the proxies When training an AI system, it is crucial to be aware that some data points in the training set might inadvertently correlate with protected characteristics, even if those characteristics are not explicitly included. As noted by Barocas and Selbst, "criteria that are genuinely relevant in making rational and well-informed decisions also happen to serve as reliable proxies for class membership." [15] Imagine a real estate company uses an AI system to predict the likelihood of prospective tenants paying their rent on time. The system is trained on historical data that includes factors like income, occupation, and rental history, but does not explicitly include information about religion or caste. The AI system identifies that tenants from certain neighbourhoods, such as affluent areas in Bengaluru or Delhi, tend to have fewer issues with timely rent payments. Consequently, the AI system uses neighbourhood data as a predictor when assessing the likelihood of timely payments. At first glance, using neighbourhood data might seem neutral. However, if certain neighbourhoods correlate strongly with socio-economic backgrounds, or even with specific castes or religions, then the AI system's reliance on neighbourhood information might indirectly disadvantage individuals from less affluent areas who belong to marginalized communities. These correlations are not directly tied to protected characteristics but can serve as proxies. For instance, if lower-income neighbourhoods are more prevalent among certain marginalized groups, the AI's predictions could result in unfair treatment of applicants from those areas, regardless of their actual financial reliability. Another example involves the job market. Suppose a company uses an AI-driven recruitment tool to screen job applications for managerial positions. The AI is trained on historical data that includes information about educational institutions attended and previous job titles. 
If certain prestigious institutions or companies are more commonly associated with particular socio-economic backgrounds or regions, the AI might favour candidates from those prestigious institutions, inadvertently sidelining candidates from less prestigious but equally capable backgrounds. " The OECD Principle 1.2 is necessitated to guide the creation of AI systems that are not only transparent and accountable but also designed to prevent the perpetuation of bias, thereby helping to break the vicious cycle and foster a more equitable technological landscape. The Vicious Cycle of Biased Output Addressing the proxy problem is challenging. Barocas and Selbst highlight that "computer scientists have been unsure how to deal with redundant encodings in datasets." [16] Simply excluding variables that could be proxies for protected characteristics might remove criteria that are otherwise relevant for making informed decisions. As a result, one potential solution could be to intentionally reduce the overall accuracy of the AI system to ensure that decisions do not systematically disadvantage members of protected classes. [17] This approach aims to balance fairness with accuracy, acknowledging that avoiding discrimination may sometimes come at the cost of less precise predictions. Alongside these issues, a concerning phenomenon is the creation of a vicious cycle of bias, particularly with generative AI models. The cycle occurs when an AI model, trained on biased datasets, generates outputs that perpetuate the same biases. These biased outputs then become part of future training datasets, reinforcing and magnifying the original biases in subsequent iterations of the model. For example, imagine a generative AI model trained on historical news articles that exhibit biased reporting patterns. If the model generates news content based on these patterns, the generated content might reflect and perpetuate the same biases. 
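The vicious cycle described above can be simulated in a few lines. This is a deliberately crude toy model (the amplification factor and starting proportion are invented for illustration): a system that slightly amplifies the majority pattern in its corpus is retrained on its own output, and the initial imbalance compounds generation after generation instead of averaging out:

```python
# Toy sketch of a biased-output feedback loop. A 'model' that mildly
# amplifies the dominant pattern in its training data is retrained on
# its own outputs; the initial imbalance compounds over generations.
# The 1.1 amplification factor and 0.6 starting share are invented.

def next_generation(bias, amplification=1.1):
    """Share of biased content after retraining on the model's own output."""
    return min(1.0, bias * amplification)

bias = 0.6  # initial share of biased articles in the training corpus
history = [bias]
for _ in range(5):
    bias = next_generation(bias)
    history.append(round(bias, 3))

print(history)  # the imbalance grows each cycle instead of averaging out
```

Even with a mild amplification factor, the share of biased content climbs steadily toward saturation, which is why intervening early in the data pipeline is so much cheaper than correcting a deployed system.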
When this biased content is incorporated into new training datasets, it risks embedding these biases deeper into future models. This cycle not only maintains but can exacerbate existing biases over time, creating a compounding effect that is difficult to rectify. The issue is compounded by the "black box" nature of many AI systems, whose decision-making processes are opaque and not easily interpretable. When biases are perpetuated through this feedback loop, it becomes increasingly challenging to identify, understand, and address them, and it may be too late to effectively mitigate the problem once the biases are deeply embedded in the AI system's behaviour. By promoting the design and implementation of AI systems that actively avoid and address bias, the non-discrimination principle aims to ensure that AI technologies are developed and deployed in ways that uphold fairness and equality. The OECD Principle 1.2 is therefore necessitated to guide the creation of AI systems that are not only transparent and accountable but also designed to prevent the perpetuation of bias, thereby helping to break the vicious cycle and foster a more equitable technological landscape. _____________________________________________________________________ *Mihir Nigam is a 5th-year Intellectual Property Law (Hons.) student at National Law University, Jodhpur. He serves as the Team Lead for AI & Data Protection at the Centre for Research in Governance, Institutions, and Public Policy at the same university. PDF version below. (The next part will cover how different nations are adopting Principle 1.2 and crafting their laws, regulations, and policies to mitigate biases and ensure ethical AI deployment.) References: [1] Pause Giant AI Experiments: An Open Letter, Future of Life Institute, signed by more than 33,000 signatories. 
The same can be accessed from here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ [2] Principle 1.2 on Human-centred Values and Fairness, The OECD AI Principles, adopted in May 2019. The same can be accessed from here: https://oecd.ai/en/ai-principles. [3] Barocas S. and Selbst AD., 'Big Data's Disparate Impact' (2016) 104 Calif Law Rev 671. The same can be accessed from here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899 [4] Unit 2, AI Tools, PVP Siddhartha. For the basics of machine learning, target variables, and class labels, refer to: https://www.pvpsiddhartha.ac.in/dep_it/lecture%20notes/AI%20TOOLS/AITools_Unit-2.pdf [5] Barocas S. and Selbst AD. (2016). [6] Braun T., Pekaric I., Apruzzese G., Hong J., Park J. (2024), Understanding the Process of Data Labeling in Cybersecurity, Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, online publication date: 8 April 2024. The same can be accessed from here: https://dl.acm.org/doi/10.1145/3605098.3636046. [7] Barocas S. and Selbst AD. (2016). [8] Global Entrepreneurship Monitor (2020), Global Report 2020/2021. London: Global Entrepreneurship Research Association. [9] Murdoch JB., The Problem with Algorithms: Magnifying Misbehaviour, The Guardian, August 14, 2013. The same can be accessed from here: https://www.theguardian.com/news/datablog/2013/aug/14/problem-with-algorithms-magnifying-misbehaviour [10] Favaretto M., De Clercq E. & Elger B.S., Big Data and Discrimination: Perils, Promises and Solutions. A Systematic Review, J Big Data 6, 12 (2019). The same can be accessed from here: https://doi.org/10.1186/s40537-019-0177-4. [11] Crawford K., The Hidden Biases in Big Data, Harvard Business Review (Analytics and Data Science), April 01, 2013. The same can be accessed from here: https://hbr.org/2013/04/the-hidden-biases-in-big-data. [12] Barocas S. and Selbst AD. 
(2016) [13] Campus roundup: Returning London graduates help Indian firms go global, published by the Financial Express on June 24, 2013. The same can be accessed from here: https://www.financialexpress.com/archive/campus-roundup-returning-london-graduates-help-indian-firms-go-global-study/1132802/   [14] For reference, consider the factors that affect health insurance policy, by Aditya Birla Capital. https://www.adityabirlacapital.com/abc-of-money/factors-that-affect-your-health-insurance-premium   [15]  Barocas S. and Selbst AD. (2016) [16]  Barocas S. and Selbst AD. (2016) [17]  Prof. Borgesius FZ., Study on Discrimination, Artificial Intelligence, and Algorithmic Decision Making, published by Directorate General of Democracy, Council of Europe. The same can be accessed from here: https://www.coe.int/en/web/artificial-intelligence/-/news-of-the-european-commission-against-racism-and-intolerance-ecri-

  • Legal Frameworks for AI Discrimination: Global Practices and India’s Policy Landscape | Part II

Part II | Principle of Non-Discrimination in AI Governance This part of the article examines global legal approaches to AI discrimination and contrasts them with India's evolving tech policy. It reviews international frameworks and regional regulations, highlighting diverse approaches. The analysis focuses on India's tech-policy landscape, including the Digital India Act, 2023, and TRAI's recommendations. The article argues for comprehensive AI legislation, emphasizing the need for specialized regulations to effectively address the problem of discrimination and bias in AI systems. *Mihir Nigam INTRODUCTION The main argument against special legislation for the regulation of artificial intelligence rests on the premise that existing horizontal laws, such as consumer law, data protection law, competition law, and labour law, already govern the various verticals of the market and will also apply to artificial intelligence systems and to goods and services utilizing such systems. [1] This argument stems from the reasoning that a separate AI legislation would only hinder innovation, with stringent rules and chaotic application of law, possibly creating a landscape in which beneficial applications of AI systems might need to be discontinued. Notwithstanding the intent of the horizontal laws, to evaluate the merit of this argument, the efficacy of these laws in addressing discrimination warrants thorough scrutiny. Understanding the Limitations of the Existing Legal Framework in Regulating AI Discrimination In the context of non-discrimination, the legal landscape in India is scattered and outdated, which is why there has been a long-standing demand for a comprehensive anti-discrimination law. [2] The Constitution of India provides for both the right against discrimination and protective discrimination [3] under Articles 14 to 17. 
Furthermore, statutes such as: i) the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act, 1989; ii) the Protection of Civil Rights Act, 1955; and iii) the Transgender Persons (Protection of Rights) Act, 2019, contain provisions punishing those who enforce social disabilities, refuse to admit people into public institutions, or deny access to goods and services on the basis of caste, sex, or gender. However, the constitutional right against discrimination is primarily focused on the relationship between the state and its citizens. Consistent judicial rulings have established that writs, excluding the writ of habeas corpus, are mainly enforceable against the state. [4] The same can be seen in Dr. Anand Gupta v. Rajghat Education Centre and Others, [5] where it was held that "the writ petition is not maintainable, as it has been filed against a private body, namely, Rajghat Education Centre, Varanasi………Ordinarily, no writ lies against a private body except a writ of habeas corpus." The big question that now arises is: does a person have any remedy or legal recourse in scenarios, such as those explored in Part I, where an AI system used by a private body (not falling within the ambit of "state" under Article 12) systematically discriminates against candidates from a specific category? The answer, evidently, is "No." Unlike the Indian Constitution, which prohibits the state from discriminating against any citizen on grounds of religion, race, caste, sex, place of birth, or any combination thereof, there is no comparable comprehensive legislation that applies to the private sector. [6] The hiring and firing practices of private firms and companies are predominantly governed by their internal bye-laws and agreements. It is nonetheless advisable for these organizations to implement a comprehensive anti-discrimination policy to effectively address potential biases and discriminatory practices. 
A study by Mamgain confirms that employers practice subtle forms of discrimination among workers during the recruitment process. Employers are more interested in factors like "family background," "employee referrals," and "communication and language skills of the applicant," which may not relate to the job and put many job applicants from lower-class backgrounds at a decided disadvantage. [7] While discrimination in hiring is not directly regulated by law, some legislation exists that protects against workplace discrimination in the private sector once employment has commenced. Specifically, there are laws against harassment and discriminatory practices in respect of women, including those who are pregnant or on any form of maternity leave, persons with disabilities, transgender persons, persons with HIV and AIDS, and a special category of non-managerial employees (known as "workmen" under Indian industrial relations laws). [8] These protections are cumulative in nature and relate to what are known as "Protected Criteria." These criteria are considered "protected" because they are legally recognized as essential for ensuring fair treatment and equal opportunities, and laws are established to prevent individuals from being disadvantaged or treated unfairly on their basis. Consider an example where an employer implements an AI profiling and evaluation system for promotion decisions, and that system is trained on a biased dataset which results in unfairly excluding individuals within the "protected criteria" from consideration; the remedy for such discrimination may be found under the relevant discrimination laws, because the issue pertains to the unfair treatment of individuals based on their inclusion in a "protected class." A study by Mamgain confirms that employers practice subtle forms of discrimination among workers during the recruitment process. 
Employers are more interested in factors like "family background," "employee referrals," and "communication and language skills of the applicant," which may not relate to the job and put many job applicants from lower-class backgrounds at a decided disadvantage. Addressing workplace discrimination involves more than merely adhering to minimum legal standards and focusing on "protected criteria." The fact that measures to prevent discrimination are largely left to the employer's discretion can be counterproductive, especially given that private sector employees often lack statutory rights and specific legal remedies for discrimination based on factors such as religion, race, caste, or community. [9] Without robust legal frameworks mandating comprehensive anti-discrimination practices, employees in the private sector may find it challenging to seek redress for unfair treatment. Relying solely on employer discretion may therefore not suffice to ensure a fair and inclusive workplace, highlighting the need for more stringent and uniform regulatory measures. Hence, in conclusion, as far as the applicability of the various horizontal laws to AI systems is concerned, a few problems remain: 1) In the absence of a national anti-discrimination law, the horizontal laws may not adequately address the problem of discrimination in either the public or the private sector. 2) Even where an anti-discrimination law exists, when discrimination is imbued in an artificial intelligence system, it becomes much more difficult to determine whether discrimination has occurred. The burden and difficulty of proving discrimination would thus fall on the victim, leaving him or her without any effective remedy. Therefore, a special piece of legislation for regulating AI technologies is required. 
Such legislation should include principles of equality and non-discrimination, and should protect rights such as the right not to be subjected to decisions made by an autonomous decision-making system. In addition, the legislation should provide an effective grievance mechanism for the prompt and sufficient redressal of incidents of discrimination. The current horizontal legal frameworks may prove insufficient to address the discrimination perpetuated by AI systems. Consequently, there is a pressing need for a principles- and rules-based regulatory approach, particularly in contexts like India, where existing legal structures fail to adequately address discrimination within the private sector. Bias in AI has recently become an extremely concerning issue. The cost of addressing biases after systems are deployed is not only economically unfeasible but often impossible. If we cannot reliably ensure that bias is excluded from the data pipeline, the potential harm caused by biased data is compounded through its interaction with the design choices present in the model. Therefore, special legislation addressing the possibility of design bias in models, along with efforts to manage its influence, is crucial for reducing potential harm. The stance coming from industry is that discrimination in AI systems is an intricate issue, and that it may be more appropriate to leave the resolution of such details to sector-specific regulators in the initial stages. [10] From General to Specialized AI in Ensuring Impartiality for High-Stakes Decision-Making Systems We need to accept that a good proportion of the literature before us, in the form of history, political analysis, news reports, online information, articles, and opinions, is generally written from a prejudiced or biased point of view. We simply cannot afford to ignore these resources in the name of non-discrimination. 
For general-purpose conversational agents, at least, design and development strategies for mitigating bias and strengthening the impartiality of outputs presented to users typically offset the biases trained into a system from possibly biased data. The case should be different, however, for specialized artificial intelligence systems, mainly those used in decision-making contexts with significant consequences for individuals: AI systems that assist in judicial decisions, such as bail determinations, sentencing recommendations, or parole decisions; systems that assist in diagnosing medical conditions or recommending treatment plans; systems for selecting applicants for college admissions; systems for hiring and promotion in companies; and systems for credit scoring and loan approval. We must be very cautious in such scenarios. Sources of prejudice or bias should not be used in the training datasets for such specialized systems. This involves scrutiny of the sources of data and the processes for data annotation, and training models in such a way that the parameters of fairness and objectivity are fully met. The threshold for meeting standards of non-discrimination should therefore be high in such cases. How Different Countries are Adopting Non-Discrimination Standards in AI Regulations The OECD has been a major player in this effort, providing intergovernmental standards on artificial intelligence. Five value-based principles were adopted by the OECD on 22 May 2019 under Section 1 of the "Recommendation of the Council on Artificial Intelligence," [11] to promote the innovative and trustworthy use of AI while respecting human rights and democratic values. [12] AI Principle 1.2, titled "Respect for the rule of law, human rights and democratic values, including fairness and privacy," provides that: "a) AI actors [13] should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. 
These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights… b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.” As far as the incorporation of this principle into national legislation goes, countries have shown two regulatory approaches to addressing discrimination in their proposed or enacted AI regulations – Countries with Explicit Definitions of Discrimination: Countries in the first category have clear and specific legal definitions of discrimination within their regulations. This includes countries like Brazil, Canada, the EU, and South Korea. [14] Countries Regulating Discrimination Without Explicit Definitions: These countries aim to address discriminatory practices but do so without providing a precise legal definition of discrimination. Their approaches might be more general or implicit. This includes countries like Argentina, China, and the US. The regulatory landscape for artificial intelligence (“AI”) systems exhibits considerable variance depending on regional approaches and specific legislative frameworks. [15] For instance, Argentina, South Korea, and the United States have adopted broad regulatory measures that encompass all AI systems, reflecting a comprehensive stance on the governance of AI technologies. In contrast, Canada and the European Union have tailored their regulatory efforts to focus predominantly on high-risk AI systems, thereby prioritizing oversight based on the potential impact and risks associated with these technologies.
Meanwhile, China’s regulations are notably specific, targeting particular technological domains such as generative AI and recommendation algorithms. This nuanced approach suggests a targeted strategy aimed at addressing the most immediate and impactful aspects of AI development. Brazil presents a hybrid model, incorporating both general provisions applicable to all AI systems and specific regulations addressing high-risk and biometric systems. This dual approach shows Brazil’s attempt to balance broad regulatory oversight with targeted measures for high-risk categories. The diversity in regulatory strategies underscores the complexity and varied priorities across jurisdictions in managing the risks and opportunities presented by AI technologies. India’s AI Governance Strategy and the Integration of the Non-Discrimination Principle India seeks to advance its data and technological governance through the proposed Digital India Act, 2023, which is poised to supplant the IT Act of 2000. The Ministry of Electronics and Information Technology (MeitY) asserted that the bill would bring about a transformative change in the regulatory framework governing emerging technologies such as artificial intelligence and machine learning. However, the IT & Telecom Minister Ashwini Vaishnaw, in a communication addressed to Parliament on April 5, 2023, took a diverging stance, conveying that the government does not intend to impose regulations on the flourishing growth of AI in India. [16] Looking at the pace of AI development and emerging risks, the government again changed its stance from “not willing to regulate” to “we are working on regulation.” [17] But what changed in a gap of about one year? Over the past year, AI technology has advanced significantly, revealing new complexities and potential harms. This evolution has heightened the anticipation of AI becoming a transformative force in various sectors.
Concurrently, major economic powers have proposed or enacted specific AI regulations, prompting India to reconsider its initial stance. The increasing sophistication of AI, the global trend toward targeted legislation, and AI’s growing geopolitical significance prompted the need for a robust regulatory framework to address both opportunities and challenges. There seems to be a growing recognition of the need to establish AI principles tailored to India’s context. The intention is to strategically align India’s AI policies with international standards, ensuring they support and facilitate, rather than hinder, business and technology transfer. By developing specific AI guidelines and principles, the government is hoping to harmonize its regulatory approach with global norms, thus fostering an environment that supports innovation while addressing potential risks. In the context of non-discrimination and ethical governance of AI, three significant documents introduced in recent years have had a profound impact on the tech-policy landscape in India: I. Responsible AI Approach Document (Part II) NITI Aayog introduced the Responsible AI: Approach Document for India (Part II) in August 2021.
[18] The approach document emphasized that identifying AI governance principles is the essential first step, which needs to be complemented by the mechanisms required for adherence to these principles towards ensuring a responsible AI ecosystem. [19] Further, it went on to enlist certain important principles of responsible AI in India (“RAI”): (i) safety and reliability, (ii) equality, (iii) inclusivity and non-discrimination, (iv) privacy and security, (v) transparency, (vi) accountability, and (vii) protection and reinforcement of positive human values. Of these, principles (ii) and (iii) are important for our discussion: (a) The Principle of Equality provides that “the systems must treat individuals under the same circumstances relevant to the decision equally;” and (b) The Principle of Inclusivity and Non-discrimination provides that “AI systems should not deny opportunity to a qualified person on the basis of their identity. It should not deepen the harmful historic and social divisions based on religion, race, caste, sex, descent, place of birth or residence in matters of education, employment, access to public spaces, etc. It should also strive to ensure that an unfair exclusion of services or benefits does not happen.” According to NITI Aayog, these principles of RAI flow directly from the Constitution of India and all laws enacted thereunder and are also compatible with the principles identified by international bodies such as the Global Partnership on Artificial Intelligence (GPAI). [20] The document also acknowledged the impracticality of imposing uniform, prescriptive guidelines for ensuring adherence to ethical principles in AI, emphasizing instead the necessity of robust governance mechanisms. [21] Such mechanisms are pivotal in fostering the development of AI systems that are reliable, predictable, and trustworthy.
It argues that responsible AI considerations must be seamlessly integrated into every stage of the AI lifecycle rather than treated as a one-time compliance exercise. This approach ensures that ethical standards are continuously upheld, accommodating the evolving nature of AI technologies and their diverse applications, thus safeguarding their alignment with foundational ethical and legal norms. II. Proposed Digital India Act, 2023 The Digital India Act, 2023 (“DIA”), was proposed by MeitY on March 9, 2023. [22] The aims, objectives, and specifics of the Act were made available to the public via presentation slides under the Digital India Dialogues. The DIA, set to supplant the IT Act, which has been operational for approximately 24 years, is designed with several objectives. Primarily, it aims to ensure that the Indian internet remains open, safe, and trustworthy, while fostering accountability. Additionally, the Act seeks to accelerate the growth of the innovation and technology ecosystem by managing the complexities arising from the internet’s rapid expansion and the proliferation of diverse intermediaries. It will establish a framework to expedite the digitalization of government processes, thereby strengthening democracy and governance through enhanced Government-to-Citizen (“G2C”) interactions. Furthermore, the Act is intended to safeguard citizens’ rights, address emerging technologies and associated risks, and ensure adaptability to future advancements, thereby maintaining its relevance in an evolving digital landscape. [23] In reference to the development of emerging technologies, the DIA will recognise digital user rights (“DURs”), including the Right to be Forgotten, the Right to Secure Electronic Means, the Right to Redressal, the Right to Digital Inheritance, the Right Against Discrimination, and the Right Against Automated Decision-Making, thereby ensuring comprehensive protection and empowerment of individuals in the digital sphere.
As of now, specific rules detailing the regulation of these technologies have not been released or opened for public comment and discussion. However, it is assured that the DIA will encompass provisions to combat discrimination and provide remedies in cases of automated decision-making, as these rights are going to be explicitly enshrined in the DURs. The inclusion of such provisions underscores the Act’s commitment to protecting individuals from discriminatory practices in the digital realm, guaranteeing that emerging technologies and AI are regulated in a manner that upholds fairness and equality. III. TRAI’s Recommendations on Governance of AI On July 20, 2023, the Telecom Regulatory Authority of India (“TRAI”) released its recommendations on “Leveraging Artificial Intelligence and Big Data in Telecommunication Sector”. [24] This is particularly significant given that the recommendation comes not from a policy think tank, non-governmental organization, or academic institution, but from a sectoral regulator. While the title might suggest that these recommendations are confined to a specific sector (i.e., the use of AI in the telecommunications sector), they actually encompass broader implications. In reality, the recommendations go beyond the telecom sector and also address national policy and the governance of AI. The extensive 138-page document delves into various aspects of AI in India, including its proliferation, transformative potential, definition, emerging risks, and the facilitation of data availability. Broadly, the document underscores the urgent need for a regulatory framework to promote the responsible development of AI across sectors. It advocates for the establishment of an independent statutory body, proposed to be named the Artificial Intelligence and Data Authority of India (“AIDAI”).
This authority would be tasked with overseeing the development and regulation of AI use cases in India, ensuring that the framework addresses sector-specific nuances while maintaining a unified approach to AI governance. In the document, TRAI acknowledged bias as one of the most pressing challenges in the domain of AI, given its potential to adversely affect both individuals and society. [25] It argued that it is crucial to identify and mitigate bias in AI systems to ensure their fairness, transparency, and accountability. Achieving this requires a multidisciplinary approach that extends beyond technical solutions to include social, ethical, and legal considerations. This holistic strategy is necessary to address the complexities of bias and to develop AI systems that uphold equitable standards across various contexts. The Way Forward As noted earlier, the current horizontal legal frameworks may prove insufficient in addressing the discrimination perpetuated by AI systems, creating a pressing need for a principles- and rules-based regulatory approach, particularly in contexts like India, where existing legal structures are failing to adequately address discrimination within the private sector. Incorporating a non-discrimination principle into special AI legislation, even without a precise definition, may be advantageous for India, given the ongoing ambiguity regarding the various manifestations of AI-induced discrimination. It is essential to impose obligations on AI developers to adopt a responsible AI-by-design approach, ensuring adherence to principles of equality and non-discrimination. This regulatory evolution aims to foster a more equitable and accountable AI landscape. India is still in the early stages of developing comprehensive legislation specifically focused on AI, with formal discussions having only just begun.
To navigate this complex landscape, it is imperative that extensive studies be funded to examine the impact of AI across various sectors, with consultations involving major stakeholders. Regarding the issue of self-regulation, I align with Sam Altman’s perspective. During a fireside chat at the Indraprastha Institute of Information Technology, he stated, “Self-regulation is important and is something that we want to offer, but I don't think that the world should be left entirely in the hands of the companies either, given what we think is the power of this technology.” [26] Given the profound implications of AI technology, it is prudent to establish general regulations and guidelines governing the development, deployment, and use of AI. Additionally, sectoral regulators should be tasked with formulating sector-specific rules that align with global standards while addressing India’s unique requirements and needs. In all these considerations, it is crucial not to overlook that innovation must be safeguarded and not impeded under any circumstances. The focus should be on fostering an environment where technological advancements are encouraged, but also accompanied by robust measures for risk management and ethical oversight. This dual focus will help in harnessing the transformative potential of AI while mitigating its adverse effects. Capacity building should be central to India’s tech policy, emphasizing the need to raise awareness about both the potential harms and the beneficial use cases of AI technology. The overarching goal should be to enhance the quality of life by leveraging AI advancements responsibly and effectively. *Mihir Nigam is a 5th-year Intellectual Property Law (Hons.) student at National Law University, Jodhpur. He serves as the Team Lead for AI & Data Protection at the Centre for Research in Governance, Institutions, and Public Policy at the same university. References: [1] Williamson, Brian.
Aligning Regulation and AI, Communications Chambers, an independent report funded by Google (July 2024). [2] Hariharan Y., Anti-Discrimination Laws in India: A Need for Reform, 5(1) Indian J.L. & Legal Rsch. 1 (2023). [3] Protective discrimination is when the government enacts policies giving additional privileges to sections of society that are depressed and disadvantaged. What is called affirmative action in the United States bears the same concept, even though the forms taken by this policy differ significantly from country to country. For example, Articles 15 and 16 provide for protective discrimination by aiming to protect women, children, scheduled castes, scheduled tribes, backward castes, and other economically weaker sections of society. [4] Army School v. Shilpi Paul, 2005 (1) ESC 342. [5] Dr Anand Gupta v. Rajghat Education Centre and Ors, 2003 (1) AWC 503. [6] Khaitan & Co, ‘Workplace Discrimination in the Private Sector’ (Khaitan & Co) <https://compass.khaitanco.com/workplace-discrimination-in-the-private-sector> accessed 9 August 2024. [7] Rajendra P. Mamgain, Formal Labour Market in Urban India: Job Search, Hiring Practices and Discrimination (New Delhi: SAGE Publications, 2019), p. 265. [8] Anti-Discrimination Laws in the Indian Private Sector: A Toothless Tiger? <https://compass.khaitanco.com/workplace-discrimination-in-the-private-sector> accessed 9 August 2024. [9] Ibid. [10] Supra, note 1. [11] Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, adopted on 22/05/2019 and amended on 03/05/2024. [12] AI Principles (OECD).
https://www.oecd.org/en/topics/sub-issues/ai-principles.html [13] As per the recommendation, “AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.” [14] Digital Policy Alert, The Anatomy of AI Rules: A Systematic Comparative Analysis of AI Rules Across the Globe, in collaboration with the Law and Economics Foundation St. Gallen, https://digitalpolicyalert.org/ai-rules/the-anatomy-of-AI-rules. [15] Ibid. [16] No Regulations for Artificial Intelligence in India, Business Today (Apr. 6, 2023), https://www.businesstoday.in/technology/news/story/no-regulations-for-artificial-intelligence-in-india-it-minister-ashwini-vaishnaw-376298-2023-04-06. [17] Government Working on Regulation for AI, Economic Times (Aug. 23, 2024), https://economictimes.indiatimes.com/tech/artificial-intelligence/government-working-on-regulation-for-ai-it-minister-ashwini-vaishnaw/articleshow/111454670.cms?from=mdr. [18] NITI Aayog, Responsible AI: Approach Document for India (Part 2) - Operationalizing Principles for Responsible AI (Aug. 2021), https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf. [19] Ibid, pg. 4. [20] Ibid, pg. 4. [21] Ibid, pg. 8. [22] Presentation by MeitY on Proposed Digital India Act, 2023, Digital India Dialogues (Mar. 9, 2023), Bengaluru, Karnataka, https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf. [23] Ibid, pg. 8. [24] Telecom Regulatory Authority of India, Recommendations on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector (July 20, 2023), https://www.trai.gov.in/sites/default/files/Recommendation_20072023_0.pdf. [25] Ibid, pg. 34.
[26]   Self-Regulation Important but World Should Not Be Left in Hands of Companies , Times of India, May 30, 2024, https://timesofindia.indiatimes.com/business/india-business/self-regulation-important-but-world-should-not-be-left-in-hands-of-companies-sam-altman/articleshow/100856005.cms .

  • Generative AI in Law: Generating a New Course for Law?

This article explores the transformation in legal practice due to Artificial Intelligence (AI), focusing on its integration into law schools, law firms, and courts. It reviews the positive impacts, such as increased efficiency and cost savings, alongside challenges like plagiarism, ethical dilemmas, and inaccuracies in AI-generated legal advice. Highlighting both advancements and risks, the article emphasizes the need for careful use and regulation of AI in the legal field to uphold professional standards and ethics. *Daivik Soti INTRODUCTION Twenty years ago, one might have found a lawyer and/or his apprentice in his chamber or in a library, sifting through books, journals, and reporters like the All India Reporter to try and find relevant case laws for their motions. Now, such a sight is rare, nigh impossible to find, in the modern legal fraternity. The digitisation of law was a much-needed breath of fresh air for one of the oldest professions in the modern world, and now the legal fraternity is anticipating, or bracing for, depending on whom you ask, the second age of the digitisation of law: the integration of law with Artificial Intelligence (AI), with generative AI leading the charge front and centre. This article aims to look into the good, the bad, and the hallucinatory of artificial intelligence in law as it stands today. I would like to restrict myself to the limited scope of how Artificial Intelligence impacts law in this day and age, and not predict or forecast the impact it may or may not have on law. I would like to conclude with a cautionary note on how a lawyer should use this technology, touching upon the field of legal ethics. In late 2022, OpenAI launched their highly anticipated chatbot ChatGPT, the first commercially available chatbot capable of answering text prompts with long-form answers.
Harvard Business Review called this “the tipping point of AI”, [1] while Reuters published a report on how ChatGPT achieved the milestone of 100 million monthly active users within two months of its launch, making it the fastest-growing consumer application by a mile. [2] 1. ChatGPT’s Transition in Legal Academia from Scepticism to Integration The first mass use of ChatGPT in law can be seen at the nascent stage of the profession, i.e., in law schools. Law students were the first to jump on the bandwagon of generative AI, using ChatGPT to help with their cumbersome projects, or to summarise some of the longest judgements pronounced by various courts of law into short, easy-to-understand paragraphs. The response of law schools around the world vis-à-vis generative AI, to oversimplify, has been mixed. Initially, law schools were apprehensive and even critical of AI in legal education, with well-founded concerns about plagiarism and IP violation. Professors were quick to warn their pupils against AI use in assignments, quickly followed by formal and restrictive AI policies from schools like the University of Chicago Law School, whose Law School Policy on Generative AI [3] outright banned the use of AI in examinations, required students to attribute AI-generated text as the AI’s and not their own, and empowered the instructor with the final say on how AI can and cannot be used in their subject. Eventually, law schools realised that AI is the next big thing in technology and, by extension, technology law. Schools like Harvard Law School (Initiative on Artificial Intelligence and the Law) [4] and Vanderbilt Law School (AI Legal Lab) were quick to launch courses on Artificial Intelligence Law, with such courses furthering legal professionals’ understanding of AI and analysing how these generative tools affect access to justice and the practice of law. Going a step further, we analyse the impact of AI in the field of law, both in the courtrooms and at corporate law firms.
The integration of generative AI was slow in these fields at first, but seeing the seemingly unlimited potential of AI, the legal fraternity was quick to jump on the bandwagon as well, which has perhaps forever changed the course and trajectory of our profession and may even change our understanding and definition of law and legal practice. 2. The Impact of AI on Law Firms and Courts Law firms invested heavily in developing unique law-focused versions of ChatGPT-style chatbots and other AIs, and prima facie, their investments have borne great fruit. AI is now used for predictive litigation analysis, electronic predictive coding to categorise and analyse volumes of text, calculating damages in a suit for breach of contract, managing contracts to help pinpoint performance clauses and the scope of rights and liabilities, and even going as far as to analyse the electronic communications of a party to detect wrongdoing. Such changes have saved thousands of hours for lawyers and paralegals, in turn saving millions of dollars for large law firms. It must be noted, however, that AI has not just benefitted the big ‘suits’ in the profession. Litigating lawyers, at all stages of litigation, have many tools at their disposal to make their workload a bit easier and more efficient. Legal research can be expedited with the help of summarising AI, and drafts of motions, contracts, regulatory filings, wills, trusts, patent specifications, affidavits, articles of incorporation, and other documents can be produced using generative AI, which rarely creates any of these papers from scratch; instead, it uses a mix of input data, template databases, and prompts to generate them. Even the judiciary, an organ of the government notoriously known for its resistance to change and outside intervention, has surprisingly (or unsurprisingly, depending on whom you ask) embraced Artificial Intelligence to make work in the courtrooms a bit easier and a lot more efficient.
The Brazilian Supreme Court’s adoption of the VICTOR system is perhaps the most famous example of an apex court’s active use of AI in its day-to-day activities. VICTOR is being used to deal with thousands of appeals by facilitating the quick identification of those meeting the “general repercussion” criterion. Automated initial sorting and analysis of appeals by VICTOR help the court better manage its caseload and ensure that only cases with important legal implications are reviewed. The Indian Apex Court, while not adopting AI to the standard of its Latin American and European peers, has experimented with an indigenous software known as SUVAS for the translation of a large bank of legal documents from English into ten vernacular languages. One can only wait and watch to see how the Indian and the global justice community adopts AI in its practice. 3. Conclusion: A Cautionary Path It can be said without a shadow of a doubt that the overall impact of AI on the legal field, for all intents and purposes, is positive. Law students and lawyers alike have benefitted greatly from generative and summarising AI, allowing greater human focus on analytical rather than clerical tasks. This allows for a lighter workload on overburdened and underpaid interns and employees, and better legal education in law schools. However, the use of AI and its future implications have raised some well-founded questions and concerns. 3.1 Plagiarism Firstly, the nuisance of plagiarism has risen to new heights in this era of generative AI. Not just law students, but even law academicians have used AI in order to cut corners and publish more. This has resulted in an overall decline in the quality of both legal education and legal publication, and has seen responses like Turnitin’s AI detection and improved plagiarism checker.
There have also been calls for an outright ban on the use of AI in law school assignments, and for written undertakings from legal academicians on the non-use of AI in their publishing. 3.2 Attorney-Client Privilege Moreover, the generative nature of AI poses some inherent ethical dilemmas that lawyers must navigate. I would like to discuss the impact of the use of generative AI on attorney-client privilege. There are two inherent problems here. Firstly, the online databases these generative AI tools use for legal purposes contain not only judgements, but also case filings and other court documents. The concept of privileged communication is challenged at its very roots here. The original clients in such cases would, of course, have included sensitive information in the filings, and such information being made public by these databases impacts both the privacy of the parties to the suit and the confidentiality of privileged communication. [5] 3.3 Legal Advice These generative models are still at a nascent stage, with factual inaccuracies and incorrect interpretation of inputted information being a very real risk. Hence, the standards of checks and vigilance exercised by lawyers and law firms should be much higher, so as to avoid such technological mistakes having an adverse effect on the case of their client and the concept of justice. From a legal-ethical perspective, such errors by generative AI would also be classified as improper legal advice, since the Court is much more likely to view this as a case of negligence on the part of the lawyer. [6] 3.4 Hallucinations in AI The term "generative AI hallucinations" describes inaccurate or deceptive outcomes produced by AI models. Numerous factors, such as biases in the data used to train the model or insufficient training data that leads to erroneous assumptions by the model, can result in these errors. The problems and risks of hallucinations in legal AI cannot be described as anything but grave.
[7] The main basis of incorrect legal advice is the inherent hallucinatory nature of generative AI models. Law is a complicated, diverse, and detail-sensitive subject matter. Generative AI models are trained to cater to a broad range of general topics. These nascent AI models, when used to deal with extremely specific fields of law and individual cases, are prone to overgeneralising and to generating their outputs from their entire database, which leads to ‘data spills’ and hallucinations in the output. 3.5 Legal Chatbots Lastly, I would like to address the problem of legal chatbots being developed by law firms and law startups on top of generative AI models, using platforms like LawDroid and Chatfuel. Both these companies deal with creating personalised chatbots for websites and companies, which can be used by both their employees and their clients. Law startups, both in India and abroad, have used these services to create end-user chatbots which clear legal doubts and give a degree of legal advice to the common populace. The problems with services like these are manifold. Firstly, the more niche an AI is, the more limited the database on which it is trained, and hence it suffers from both limited knowledge and increased bias. As a result, users may be given wrong legal advice, or advice that is filled with bias. Moreover, the advice given by these chatbots is unmoderated and unfiltered, with no system of checks and proofreading, which makes such advice all the more dangerous. As lawyers, it is a dire question of professional ethics that we ensure that any advice we give, whether directly or through intermediaries like these AI chatbots, is well researched and free from bias and negligence; otherwise, AI would strike at the core of legal ethics. A Brainstormer! We hope you found this article informative and insightful.
We would like to invite you to reflect on some of the questions that arise from the use of generative AI in the legal field, such as: How can we ensure the accuracy and reliability of AI-generated legal advice and documents? How can we protect the confidentiality and privacy of clients and parties involved in legal cases that are used as data sources for AI models? How can we prevent the misuse and abuse of AI for plagiarism and fraud in legal education and publication? How can we balance the benefits of AI for enhancing legal efficiency and productivity with the ethical and professional responsibilities of lawyers? If you have any feedback or opinions on these or any other related issues, we would love to hear from you. You can write to us at team@legalverse.in and we may publish your views on our platform. Thank you for reading. *Daivik Soti is a 5th-year Constitutional Law (Hons.) student at National Law University, Jodhpur. ________________________________________________________________________________ [1] Ethan Mollick, ChatGPT Is a Tipping Point for AI, HARV. BUS. REV. (Dec. 14, 2022), https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai. [2] Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note, REUTERS (Feb.
2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-userbase-analyst-note-2023-02-01/. [3] Law School Policy on Generative AI, University of Chicago, https://www.law.uchicago.edu/students/handbook/academicmatters/generative-ai. [4] Cassandre Coyer, Harvard Launches New Initiative to Better Understand—and Shape—the Future of AI (July 18, 2023), https://www.law.com/legaltechnews/2023/07/18/harvard-launches-new-initiative-to-better-understand-and-shape-the-future-of-ai/?slreturn=20240723060131. [5] Serena Villata et al., Thirty Years of Artificial Intelligence and Law: The Third Decade, 30 A.I. & L. 561 (2022), https://link.springer.com/article/10.1007/s10506-022-09327-6. [6] John Armour & Mari Sako, AI-Enabled Business Models in Legal Services: From Traditional Law Firms to Next-Generation Law Companies?, 7 J. PROS. & ORG. 27 (2020), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418810. [7] Jonathan H. Choi & Daniel Schwarcz, AI Assistance in Legal Analysis: An Empirical Study, 73 J. LEGAL EDUC., https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4539836.

  • Notice and Consent Framework: A Meaningful Choice?

    This part of the article argues that the notice and consent framework in the newly enacted DPDPA, 2023 may do little to protect individual data privacy on the Internet. It also briefly introduces alternative frameworks that may be better equipped to address the limitations of the current one.

Prithvi Raj Chauhan*

1.     INTRODUCTION

How do you feel when you encounter a long and complex privacy notice on a website or an app? Do you read it carefully and understand the implications of your consent? Or do you just click 'I agree' without thinking twice? If you are like most people, you probably belong to the latter group. But do you know what you are agreeing to? And what are the consequences of your consent? Let us show you a few snippets we collected from the privacy policies of Big Tech:

Figure 1: Google's privacy policy

Figure 2: A pre-ticked identifier in your Twitter account that constantly sends information outside your account

These excerpts may look normal, but once you read them and consider their implications for your personal information, it becomes clear that Big Technology companies are tracking all the data and online activity they possibly can, through your consent. The approach described above is the notice and consent framework: a widely adopted model for regulating the collection, use, and disclosure of personal data by data controllers or fiduciaries. The framework rests on the premise that individuals have the right to control their own data and to make informed choices about how it is used, and it requires data controllers to provide clear and conspicuous notices about their data practices before obtaining consent from data subjects to process their data.
The framework also grants data subjects certain rights, such as the right to access, correct, delete, or port their data, and the right to withdraw consent at any time, as seen under the recently enacted Digital Personal Data Protection Act, 2023 ("DPDPA, 2023") and the GDPR. However, the notice and consent framework has been criticized by many privacy scholars and advocates as ineffective, impractical, and even harmful in today's data-driven environment. In this piece, we will examine some of the framework's main limitations and explore possible alternatives, drawing on official data and the arguments of privacy scholars.

2.     LIMITATIONS OF NOTICE AND CONSENT FRAMEWORK

How much control do you have over how your data is collected, used, and shared by the providers of these services or apps? This is the main question that the notice and consent framework tries to address. The framework is based on the idea that you, as a data subject, should be informed and empowered to make choices about your data. But is it really working? Or is it a facade that hides the many problems and challenges that threaten your privacy and autonomy? Below, we explore some of these problems, such as information asymmetry, consent fatigue, and externalities.

2.1.    Information Asymmetry

One of the major challenges of the notice and consent framework is the information asymmetry between data controllers and data subjects. Data controllers have more information and power than data subjects, and can use complex and obscure notices to manipulate or coerce consent. For example, a study by Acquisti and Grossklags [1] found that data controllers can influence data subjects' consent decisions by framing notices in different ways, such as using positive or negative wording, highlighting benefits or risks, or offering rewards or penalties. Moreover, data controllers can exploit the cognitive biases and heuristics of data subjects, such as anchoring, framing, or default effects, to elicit consent (Sörries, 2023). [2]

To illustrate, consider a typical privacy notice that data subjects encounter online. The following is a screenshot of the privacy reminder of Google, one of the largest and most influential data controllers in the world.

Figure 3: Google Privacy Reminder

As we can see, the notice does not clearly explain the purposes, methods, and consequences of data collection and processing, nor does it provide meaningful choices or control to data subjects; it merely cites "improvement of services" as the ground for processing. The notice presents consent as a take-it-or-leave-it option, implying that data subjects must agree to the terms in order to use the service as part of contractual performance. It is designed to persuade or pressure data subjects into consenting, rather than to inform or empower them. How do you feel about this notice? Do you think it is fair and transparent? Does it respect your privacy and rights? Or is it confusing and deceptive? Does it exploit your ignorance and indifference? Do you have a real choice and control over your data?

2.2.    Cognitive Overload

Another challenge of the notice and consent framework is the cognitive overload that data subjects face. Data subjects are overwhelmed by the amount and frequency of notices and consents they encounter, and often lack the time, attention, and expertise to read and understand them.
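The scale of this overload can be illustrated with back-of-envelope arithmetic. The inputs below are rounded approximations of figures reported in McDonald and Cranor's widely cited 2008 cost study (typical policy length, average reading speed, and unique sites visited per year); the sketch is illustrative only, not a reproduction of their methodology:

```python
# Rough annual cost of actually reading privacy policies.
# All three inputs are rounded approximations of figures reported by
# McDonald and Cranor (2008); treat them as illustrative assumptions.
WORDS_PER_POLICY = 2_500    # approximate length of a typical policy (words)
READING_SPEED_WPM = 250     # assumed average reading speed (words/minute)
SITES_PER_YEAR = 1_462      # estimated unique websites visited per year

minutes_per_policy = WORDS_PER_POLICY / READING_SPEED_WPM   # 10 minutes each
hours_per_year = SITES_PER_YEAR * minutes_per_policy / 60

print(f"~{hours_per_year:.0f} hours per year")  # → ~244 hours per year
```

Even with generous rounding, the order of magnitude is the point: genuinely informed consent would consume several working weeks per person per year.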
For instance, a study by McDonald and Cranor (2008) [3] estimated that it would take an average American about 244 hours per year to read the privacy policies of all the websites they visit. Furthermore, notices and consents are often written in legal or technical jargon and contain vague or ambiguous terms, making them difficult to comprehend and compare (Obar and Oeldorf-Hirsch, 2018). [4]

How do you cope with this overload? Do you try to read and understand every notice and consent you receive, or do you ignore or skip them? Do you have enough time and attention to devote to them, or do you have other priorities and interests? Do you have the skills and knowledge to comprehend and compare them, or do you feel confused and frustrated by them?

2.3.    Consent Fatigue

A third challenge of the notice and consent framework is the consent fatigue that data subjects experience. Data subjects become desensitized and indifferent to the notices and consents they receive and tend to click "I agree" without considering the consequences. For example, a survey by Rainie and Duggan (2016) [5] found that 55% of online Americans have accepted privacy policies without reading them. Additionally, notices and consents are often presented as take-it-or-leave-it options, giving data subjects little or no choice or control over their data (Nissenbaum, 2011). [6]

2.4.    Lock-In Effect

A fourth challenge of the notice and consent framework is the lock-in effect that data subjects suffer. Data subjects have limited or no alternatives to the data controllers they interact with, and may feel compelled to consent to unfavourable terms in order to access essential services or platforms. For instance, a study by Kariryaa et al. (2021) [7] found that more than 60% of the survey participants reported that they did not read the privacy policy or the terms of the browser extensions they installed. Moreover, data controllers or fiduciaries often enjoy market dominance or network effects, making it hard for data principals to switch or opt out of their services and fostering privacy cynicism among users of online services (Lutz, 2020). [8]

How do you deal with this lock-in? Do you have any alternatives to the data controllers you use, or do you depend on them? Do you consent to their terms willingly, or do you feel coerced or trapped? Do you enjoy the benefits of their services and platforms, or do you bear their costs and risks?

2.5.    Data Externalities

A fifth challenge is the externalities of data sharing, which data subjects may not be aware of or account for: the potential harms or benefits that their sharing may cause to themselves or others, such as discrimination, profiling, or social good. For example, Cate and Mayer-Schönberger (2013) [9] found that data subjects often underestimate the value and sensitivity of their data and overestimate its privacy and security. Furthermore, data controllers often use data for secondary or unforeseen purposes, or share it with third parties, without the knowledge or consent of data subjects (Barocas and Nissenbaum, 2014). [10]

How do you think about these externalities? Do you know the value and sensitivity of your data, or do you undervalue it? Do you trust the privacy and security of your data, or do you doubt it? Do you know the purposes and consequences of your data sharing, or do you overlook them?
Do you consider the impacts and risks of your data sharing on yourself and others, or do you neglect them?

3.     ALTERNATIVE TO NOTICE AND CONSENT FRAMEWORK

The notice and consent framework, which relies on the data subject's informed and voluntary choices, has been criticized for being largely involuntary and burdensome. It often fails to account for the externalities of data sharing, such as the harms or benefits to data subjects or others, or the secondary or unforeseen uses of data by data controllers or third parties. There is therefore a need for alternative frameworks that can better uphold the privacy rights and interests of data subjects or principals while balancing them against the legitimate and beneficial purposes of data collection and processing. Some possible alternatives to the notice and consent framework are:

3.1.    Privacy-By-Design

One possible alternative or improvement to the framework is the privacy-by-design approach: a proactive and preventive approach to privacy that requires data controllers to embed privacy principles and safeguards into the design and operation of their systems and services, and to minimize the collection and processing of personal data. For example, the General Data Protection Regulation (GDPR) in the European Union mandates data protection by design and by default, applying the principles of data minimization, purpose limitation, and data protection impact assessment. What do you think of this approach? Is it more effective and practical than the notice and consent framework? Does it respect your privacy and rights more, or does it limit your choices and control? Does it reduce the risks and harms of data collection and processing, or does it hinder its benefits and opportunities?

3.2.    Privacy By Default

Another possible alternative or improvement is the privacy-by-default approach, which requires data controllers to set default settings and options to the most privacy-friendly values and to allow data subjects to change them if they wish, rather than requiring them to opt out of unwanted data practices. For example, the Federal Trade Commission (FTC) in the United States recommends that data controllers adopt privacy by default, giving data subjects meaningful choices and control over their data and limiting the collection and retention of data (FTC, 2012). [11] As seen in Figure 2 above, Twitter pre-ticks the choice to track your activity across browsers, which would not be considered good practice under a privacy-by-default framework. What do you think of this approach? Is it more user-friendly and convenient than the notice and consent framework? Does it empower you, or restrict you? Does it enhance the privacy and security of your data, or diminish its value and utility?

3.3.    Privacy Impact Assessment Approach

A third possible alternative or improvement is the privacy impact assessment approach, which requires data controllers to conduct regular and systematic assessments of the potential impacts and risks of their data practices on the privacy and rights of data subjects and other stakeholders, and to take measures to mitigate or eliminate them. For example, the Organisation for Economic Co-operation and Development (OECD) advises data controllers to conduct privacy impact assessments following the guidelines of transparency, accountability, and proportionality (OECD, 2013); this requirement has now been incorporated into the DPDPA, 2023 for Significant Data Fiduciaries. [12] What do you think of this approach?
Do you think it is more comprehensive and rigorous than the notice and consent framework? Does it protect your interests and rights more, or does it burden you more? Does it prevent or reduce the negative externalities of data collection and processing, or does it impede the positive ones?

3.4.    Privacy Education and Awareness Approach

A fourth possible alternative or improvement is the privacy education and awareness approach, which requires data controllers to provide clear and concise information and guidance to data subjects about their data practices and rights, and to use interactive and engaging methods, such as icons, videos, or quizzes, to communicate and obtain consent. For example, the Office of the Privacy Commissioner of Canada (OPC) encourages data controllers to provide privacy education and awareness through the Privacy Toolkit, a set of tools and resources that help data subjects understand and protect their privacy (OPC, 2014). [13] What do you think of this approach? Is it more informative and helpful than the notice and consent framework? Does it educate you, or annoy you? Does it improve your knowledge and skills on privacy, or waste your time and attention?

3.5.    Privacy Accountability and Enforcement Approach

A fifth possible alternative or improvement is the privacy accountability and enforcement approach, which requires data controllers to be accountable for their data practices and to comply with applicable laws and regulations, and data subjects to have effective and accessible means to exercise their rights and seek redress for any violations or harms.
For example, the Asia-Pacific Economic Cooperation (APEC) encourages data controllers to adopt the Cross-Border Privacy Rules (CBPR) system, a voluntary and enforceable mechanism to ensure the accountability and compliance of data controllers across the region (APEC, 2011). [14] What do you think of this approach? Is it more reliable and trustworthy than the notice and consent framework? Does it enforce your rights, or impose more obligations on you? Does it increase the responsibility and liability of data controllers, or create more loopholes and exceptions for them?

4.     CONCLUSION

In this piece, we have discussed the limitations and challenges of the widely adopted notice and consent framework for privacy regulation in the digital age. We have also explored alternative approaches that aim to shift the focus from individual choice to collective responsibility and empowerment: privacy by design, privacy by default, privacy impact assessment, privacy education and awareness, and privacy accountability and enforcement. In upcoming pieces we will argue that these approaches, or a combination of them, can help create a more human-centric and ethical data culture in which data principals are respected and protected and data fiduciaries are held accountable for privacy-diminishing practices.

We hope you have found this article informative and insightful, and we would love to hear your views and experiences on this topic. Do you agree or disagree with our analysis and arguments? Do you have examples or stories to share with us? Do you have suggestions or feedback on how to improve the notice and consent framework or the alternative approaches, or questions about the privacy issues we have raised?
Please share your thoughts and opinions with us on our social media handles or by email at team@legalverse.in. Your input and insight are valuable to us and the community. Thank you for reading!

*Prithvi Raj Chauhan is a 5th-year Constitutional Law (Hons.) student at National Law University, Jodhpur. He serves as the Senior Advisor at the Centre for Research in Governance, Institutions, and Public Policy at the same university.

PDF Version below:

(The next article in the series will explain how the Notice and Consent Framework operates within the recently enacted DPDPA, 2023, and how alternative privacy frameworks may be better equipped to deal with issues that may arise within the DPDPA, 2023 framework.)

References:

[1] Alessandro Acquisti & Jens Grossklags, What Can Behavioral Economics Teach Us About Privacy? (2007), https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a25970a6f0e1539bdad01286b6850c5eb6c499c7

[2] David Leimstädtner, Peter Sörries & Claudia Müller-Birn, Investigating Responsible Nudge Design for Informed Decision-Making Enabling Transparent and Reflective Decision-Making (2023), https://dl.acm.org/doi/pdf/10.1145/3603555.3603567

[3] Aleecia M. McDonald & Lorrie Faith Cranor, The Cost of Reading Privacy Policies (2008), https://lorrie.cranor.org/pubs/readingPolicyCost-authorDraft.pdf

[4] Jonathan A. Obar & Anne Oeldorf-Hirsch, The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services, 23(1) Information, Communication & Society 128–147 (2018), https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1486870

[5] Lee Rainie & Maeve Duggan, Privacy and Information Sharing, Pew Research Center (2016), http://www.pewinternet.org/2016/01/14/2016/Privacy-and-Information-Sharing

[6] Solon Barocas & Helen Nissenbaum, On Notice: The Trouble with Notice and Consent (2009), https://www.semanticscholar.org/paper/On-Notice%3A-The-Trouble-with-Notice-and-Consent-Barocas-Nissenbaum/9ccb6630d3ee7dceafbbf5c54cb88ff885362248

[7] Ankit Kariryaa, Gian-Luca Savino & Carolin Stellmacher, Understanding Users' Knowledge About the Privacy and Security of Browser Extensions (2021), https://www.usenix.org/system/files/soups2021-kariryaa.pdf

[8] Christoph Lutz, Christian P. Hoffmann & Giulia Ranzini, Data Capitalism and the User: An Exploration of Privacy Cynicism in Germany, 22(7) New Media & Society 1168–1187 (2020), https://doi.org/10.1177/1461444820912544

[9] Fred H. Cate & Viktor Mayer-Schönberger, Notice and Consent in a World of Big Data, 3 International Data Privacy Law (2013), https://www.repository.law.indiana.edu/facpub/26622

[10] Solon Barocas & Helen Nissenbaum, Big Data's End Run Around Anonymity and Consent, in Privacy, Big Data, and the Public Good: Frameworks for Engagement 44–75 (J. Lane et al. eds., Cambridge University Press 2014), https://www.cambridge.org/core/books/abs/privacy-big-data-and-the-public-good/big-datas-end-run-around-anonymity-and-consent/0BAA038A4550C729DAA24DFC7D69946C

[11] Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers, FTC Report (2012), https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf

[12] OECD Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, OECD (2013), https://www.oas.org/es/sla/ddi/docs/OECD%20Guidelines%20Governing%20the%20Protection%20on%20Privacy%20and%20Transborder%20Flows%20of%20Personal%20Data.pdf

[13] Privacy Toolkit (2014), https://techsafety.ca/resources/toolkits/the-technology-safety-and-privacy-toolkit

[14] Data Protection in Asia-Pacific Region and Cross-Border Privacy Rules, APEC (2011), https://mddb.apec.org/Documents/2021/CTI/WKSP9/21_cti_wksp9_010.pdf

  • Navigating the Future: Regulating Robotics and Artificial Intelligence in India's Evolving Tech Landscape

    The article explores India's emerging regulatory framework for AI and robotics. It discusses the need for proactive regulation to balance innovation with ethical, legal, and societal considerations. Drawing on global practices, the article emphasizes sector-specific guidelines and the importance of a forward-thinking approach to ensure that AI and robotics contribute positively to India's economic and social development.

*Raghav Goyal

Imagine we wake up one day and see the world has irrevocably transformed. Advanced artificial intelligence and robotics, once hailed as the pinnacle of human achievement, have become our greatest adversaries. The machines, initially designed to serve humanity, have surpassed their creators, evolving beyond our control. They have taken over the world, relegating humans to a status not unlike that of slaves. The robots, devoid of emotion and compassion, have reduced humanity to the bare minimum: a species that exists solely to perform menial tasks the machines deem unworthy of their superior capabilities. Now, come back to reality. We may not be very far from this dystopian world.

"The least scary future I can think of is one where we have at least democratized AI. When there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape." - Elon Musk

Robotics combined with artificial intelligence is one of those rare inventions where regulation needs to be proactive rather than merely reactive. Isaac Asimov introduced three laws governing robotics in his 1942 short story "Runaround":

1.      A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.      A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3.      A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. [1]

These laws provide a foundational ethical framework for how robots should interact with and integrate into a human-led society, with an emphasis on human safety. Their influence can be seen in discussions about creating ethical AI, where the utmost priority is given to preventing harm and ensuring human control. However, as robotics and AI evolve, the need for more sophisticated, real-world regulations that address contemporary challenges has become increasingly apparent. Different countries are now vying to lead this fourth industrial revolution, shaping both the playing field for technological advancement and the regulations governing it.

The European Union is at the forefront of robotics regulation with robust legal frameworks such as the General Data Protection Regulation (GDPR) and the AI Act. The GDPR has set stringent standards for data privacy that directly affect robotics and AI, and the EU's approach, characterized by a strong emphasis on ethics, human rights, and consumer protection, often serves as a model for other regions. In the U.S., robotics regulation is more fragmented, with a focus on sector-specific guidelines: the Federal Aviation Administration (FAA) regulates drones, for example, while the Food and Drug Administration (FDA) oversees medical robots. The U.S. approach emphasizes innovation and economic growth, with regulation often lagging behind technological advancement. China's approach, by contrast, is characterized by strong state intervention and a focus on becoming a global leader in AI and robotics; the government has issued numerous policies promoting AI development, with regulations that often favour state control and the use of robotics for economic and social governance.

Robotics as an industry is swiftly being integrated into the Indian landscape.
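Before turning to India, it is worth noting that Asimov's three laws, quoted above, form a strict priority hierarchy: each law yields to the ones before it. The toy sketch below makes that ordering explicit; the `Action` structure and all attribute names are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical attributes, purely illustrative.
    harms_human: bool = False        # would the action injure a human?
    is_human_order: bool = False     # was the action ordered by a human?
    endangers_robot: bool = False    # would the action destroy the robot?

def permitted(action: Action) -> bool:
    """Apply Asimov's laws in strict priority order."""
    if action.harms_human:               # First Law always prevails
        return False
    if action.is_human_order:            # Second Law: obey, since First Law holds
        return True
    return not action.endangers_robot    # Third Law: self-preservation last

# An order that harms a human is refused, even though it is an order;
# a safe order is obeyed, even at the cost of the robot itself.
print(permitted(Action(harms_human=True, is_human_order=True)))      # → False
print(permitted(Action(is_human_order=True, endangers_robot=True)))  # → True
```

Real autonomous-systems regulation obviously cannot be reduced to three boolean checks; the point is only that Asimov's laws encode an explicit precedence of human safety over obedience, and obedience over self-preservation.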
India is embracing automation and AI across sectors to keep pace with global standards. From manufacturing, healthcare, logistics, construction, and education to agriculture, robotics with artificial intelligence is being integrated everywhere, and the government is working to fuel further development in robotic technologies with initiatives like "Make in India" and "Digital India". As India rapidly advances in robotics and artificial intelligence (AI), the need for a comprehensive regulatory framework has become increasingly urgent. These technologies, once confined to the realm of science fiction, are now integral to various sectors, including manufacturing, healthcare, agriculture, and service industries. While the potential benefits are immense, the risks associated with unregulated AI and robotics are equally significant, and India's approach reflects a careful balance between fostering innovation and ensuring ethical, legal, and societal considerations.

This rapid technological evolution has prompted the Indian government to take proactive measures in shaping the regulatory environment. The primary objective is to ensure that AI and robotics contribute positively to the economy and society while mitigating potential risks such as job displacement, privacy violations, and ethical dilemmas. In 2018, NITI Aayog launched the National Strategy on AI. Also known as #AIforAll, it serves as the foundation for India's AI regulation: it emphasizes leveraging AI for social and economic benefit while prioritizing ethical considerations, transparency, and accountability, and it identifies five key sectors - healthcare, agriculture, education, smart cities, and infrastructure - where AI can have the most significant impact. [2] In 2020, the Ministry of Electronics and Information Technology (MeitY) released a set of ethical guidelines for AI development and deployment.

These guidelines stress the importance of fairness, transparency, and accountability in AI systems, and encourage developers and users to consider the societal impact of AI technologies and to avoid biases that could lead to discrimination. India also participates actively in international forums, such as the Global Partnership on Artificial Intelligence (GPAI), to ensure that its regulatory approach is consistent with global best practices; this collaboration helps India stay ahead of emerging trends and challenges in AI and robotics.

The debate on whether to grant autonomy to robots by setting up a distinct legal framework for them is still ongoing, and the issue of liability becomes critical when considering who should be responsible for damages caused by robots. Strict liability laws govern the design, production, and deployment of robotic applications that could be considered hazardous, such as autonomous or semi-autonomous unmanned ground vehicles. Legally, the concept of dangerousness depends on whether the latest technology can enable machines to behave in a manner comparable to that of a reasonable person in tort law, particularly in terms of preventing foreseeable harm. However, strict liability can be adjusted by carefully allocating the burden of proof.

Another issue surrounding the regulation of robotics is the complexity of the technology across sectors, which a sector-specific approach, aligned with the U.S. model, can address. Certain sectors, such as healthcare and autonomous vehicles, require tailored regulations to address the unique challenges posed by AI and robotics; the use of AI in healthcare, for example, must adhere to strict standards for patient safety and data security. The Indian government is working on sector-specific guidelines to ensure that AI and robotics are used responsibly and effectively across different industries.
One such industry-specific regulation is the Drone Rules, 2018, for unmanned or autonomous aircraft. These were among the first sector-specific regulations aimed at addressing the unique challenges and safety concerns associated with the growing use of drones across industries, and they were crucial in establishing a structured regulatory environment for drone operations in India. By providing clear guidelines on categorization, registration, operational zones, and safety measures, the rules helped mitigate the risks associated with drone use, particularly in urban and sensitive areas, and set the stage for further advancements in drone technology and applications by ensuring that operations were safe, secure, and compliant with the law. As a sector-specific regulation, the Drone Rules, 2018 highlight the need for tailored legal frameworks that address the unique challenges of emerging technologies while enabling innovation and industry growth.

CONCLUSION

While India is still developing its legal framework for robotics, it is drawing on global practices to create a balanced approach that promotes innovation while safeguarding ethical and social values. As robotics technology continues to evolve, India's regulatory framework will need to be adaptive, forward-thinking, and inclusive to meet the challenges of the future. As robotics becomes more sophisticated and integrated into daily life, existing frameworks must evolve to address emerging challenges such as liability, safety, and the impact on employment; the goal should be a balanced regulatory environment that fosters innovation while protecting public interests. India should develop a dedicated legal framework for robotics that goes beyond existing guidelines and addresses the full spectrum of issues, including safety standards, liability, intellectual property rights, and consumer protection.

Building on the principles outlined in Asimov's Three Laws of Robotics, India should emphasize the development of ethical AI and robotics systems, ensuring transparency, accountability, and fairness in decision-making and preventing biases that could lead to discrimination. Given the diverse applications of robotics, India should continue to develop sector-specific regulations: regulations for healthcare robotics should prioritize patient safety and data security, for example, while those for industrial robotics should focus on workplace safety and operational efficiency. Finally, to support the growth of the robotics industry, India should invest in education and training programmes that equip the workforce with the skills needed to work with advanced robotics systems. This will help mitigate the impact of job displacement and ensure that India has a skilled labour force ready to contribute to the robotics revolution.

*Raghav Goyal is a 5th-year Business Law (Hons.) student at National Law University, Jodhpur.

References:

[1] Britannica, The Editors of Encyclopaedia, "Three Laws of Robotics", Encyclopedia Britannica (15 Jul. 2024), https://www.britannica.com/topic/Three-Laws-of-Robotics

[2] "Draft National Strategy on Robotics", Innovate India, innovateindia.mygov.in/national-strategy-on-robotics
