
The question “Is AI generative?” invites both a technical and a philosophical inquiry into the capabilities and classification of artificial intelligence systems, particularly those that produce novel outputs such as text, images, music, and code. At its core, generative AI refers to a class of machine learning models, such as Generative Adversarial Networks (GANs), Transformer-based large language models (e.g., the GPT family), and diffusion models, that are designed to generate new content based on patterns learned from vast datasets.
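To ground that definition, the following minimal sketch (in Python, using the open-source Hugging Face transformers library and the publicly available gpt2 model, both of which are assumptions chosen purely for illustration and not discussed in this article) shows what “generating new content from learned patterns” looks like in practice: a pretrained Transformer model is handed a prompt and continues it by sampling from the statistical patterns it absorbed during training.

    # Minimal illustrative sketch only; assumes the Hugging Face "transformers"
    # package and the public "gpt2" checkpoint, used here for demonstration.
    from transformers import pipeline

    # Load a small pretrained Transformer-based language model.
    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt by sampling from patterns learned during
    # training on a large text corpus; it does not "understand" the prompt.
    result = generator("Artificial intelligence and the law",
                       max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"])

The output is statistically plausible text rather than the product of intent, which is precisely the distinction on which the philosophical debate below turns.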
This abstract investigates whether AI can truly be considered “generative” in the creative, legal, and computational sense. It examines the mechanisms that underpin generative AI systems, noting that while these systems can produce outputs that are statistically novel and often indistinguishable from human-created content, their “generativity” is bounded by their training data and algorithmic limitations. Unlike human creativity, AI generativity lacks intentionality, consciousness, or understanding, raising philosophical debates on whether such systems are genuinely creative or merely sophisticated mimics.
From a legal standpoint, the recognition of AI as “generative” carries significant implications. Intellectual property laws, particularly copyright and patent regimes, must grapple with questions of authorship, ownership, and originality in AI-generated works. Jurisdictions differ in their treatment of AI-generated content: some deny copyright protection altogether in the absence of human authorship, while others are considering hybrid models of joint authorship. Additionally, new regulatory frameworks such as the European Union’s AI Act are beginning to define and classify generative AI, introducing obligations for transparency, disclosure, and risk assessment.
Ethical and societal concerns further complicate the issue: generative AI can produce misleading content (e.g., deepfakes, misinformation), potentially infringe upon data privacy, and reinforce harmful biases. As AI systems increasingly assume creative and communicative roles traditionally reserved for humans, the inquiry “Is AI generative?” becomes more than a technical categorization — it is a foundational question that challenges existing legal norms, ethical paradigms, and societal structures.
This abstract concludes that while AI can be functionally described as “generative” based on output and design, its classification depends heavily on context — legal, philosophical, and technological. Recognizing the nuances of generativity is essential for shaping responsible innovation, informed governance, and ethical AI development.
If AI systems negotiate or conclude contracts autonomously, key legal questions arise. Traditional contract law requires offer and acceptance, lawful consideration, free consent, capacity of the parties, and a lawful object. When AI is involved, it is unclear how intention and consent can be attributed to a system that has no legal personhood. A contract generated or partially negotiated by AI is generally enforceable if these essential elements are satisfied and the contract can be attributed to identifiable human or corporate parties. However, disputes may arise over responsibility for errors, misrepresentations, or unauthorized commitments made by the system. AI is also used in Online Dispute Resolution (ODR) to assist with negotiation, mediation, and case management. As AI continues to evolve, contract law will need to clarify how autonomous systems fit within these traditional doctrines.
Key Issues: Inventorship. Indian law recognizes only natural persons as inventors; AI systems like DABUS cannot be listed as inventors under Indian law.
Section 6 of the Patents Act, 1970: Specifies that only the true and first inventor or their assignee can apply for a patent.
Implication: Inventions autonomously generated by AI are currently not patentable in India unless a human is clearly linked as the inventor.
Patentability of AI Algorithms. Section 3(k): Excludes “mathematical or business methods or a computer programme per se” from being patented. However, if AI is implemented in combination with hardware or demonstrates a technical effect, it may be considered patentable. Examples of accepted technical effects: enhanced speed, better security, improved user interface.
Case Insight: Though there is no Indian case law on AI inventorship, Indian Patent Office guidelines follow EPO and USPTO trends closely, and rejections have been issued for algorithm-based inventions without clear technical contributions.
Key Issues: Authorship. Section 2(d) of the Copyright Act, 1957 defines an author in human terms (e.g., the composer of a musical work, the artist of an artistic work); there is no legal recognition of AI as an author. Example:
If AI generates a song or painting, Indian law currently offers no copyright protection unless a human is identified as the author (e.g., the programmer or the user who supplied the prompts).
Ownership: Where AI-assisted creation is involved, the rights may vest in the person who causes the work to be created, under the “employer-employee” or “commissioned work” doctrines.
Training AI on Copyrighted Data: Indian law does not specifically address the training of AI models on protected content (e.g., books, songs). Fair dealing under Section 52 may apply to research and education, but commercial training of models without licensing could be deemed infringement.
AI-Generated Logos or Trade Dress: Designs generated autonomously by AI may not qualify for registration unless a human is credited. Under The Trade Marks Act, 1999, the applicant must be a legal person (individual or entity), not a machine.
NITI Aayog reports have acknowledged the lack of clarity in AI-related IPR protection and called for AI-specific amendments to IP laws to recognize and regulate AI contributions.
Parliamentary Discussions: The Standing Committee on IPR (2021 and 2023) urged the IP Office to revisit its policies and explore frameworks accommodating AI-generated content and inventions.
Possible Legal Reforms: Recognizing AI-assisted creation through “co-authorship” models. Clarifying data usage rights for AI training under copyright law. Creating a sui generis system for AI-generated works (a new class of protection).
Ethical and Regulatory Needs: Prevent misuse of AI to infringe IP rights (e.g., deepfake trademarks, synthetic art sold without consent). Promote licensing frameworks for training AI ethically.
The Information Technology Act, 2000 (IT Act) is India’s primary cyber law legislation, aimed at legal recognition of electronic commerce, digital signatures, cybercrime, and online governance. While it does not directly mention Artificial Intelligence (AI), several of its sections are indirectly applicable to AI-related technologies, especially in the areas of data use, cybersecurity, and liability.
Section 43A: Holds a body corporate liable if it fails to protect sensitive personal data, causing wrongful loss/gain.
Section 72A: Punishes disclosure of personal information without consent, especially for gain.
Implication: AI developers and deployers are required to ensure data is collected, processed, and stored securely. Using personal data to train AI models without proper consent may lead to civil or criminal liability under the IT Act.
Section 66: Penalizes hacking and unauthorized access to computer systems.
The use of malicious AI (e.g., bots or AI-driven malware) could be punished under this section.
Section 66F: Covers cyber terrorism – advanced AI systems used to disrupt critical infrastructure could potentially fall here.
Example: If an AI tool is used to penetrate financial or government networks, the developer or deployer could face legal action.
Currently, Indian law does not assign legal personhood to AI systems. Thus, under the IT Act:
Liability falls on the human agents or organizations deploying the AI. Issues like algorithmic bias or autonomous decision-making lack clarity under the current legal framework.
AI systems can create, modify, or interpret digital content. Section 65B of the Indian Evidence Act (read with the IT Act) governs admissibility of electronic records. Evidence generated by AI (e.g., surveillance footage, AI-based forensic tools) can be admitted if authenticated properly.
The Digital India Act (expected to replace the IT Act) is in the drafting stage as of 2024-25. It is expected to address AI regulation more explicitly, include guidelines on AI accountability, transparency, and bias mitigation, and introduce algorithmic audits and ethical AI frameworks.
While the IT Act is a useful tool in regulating AI-related risks, it lacks explicit AI governance provisions. Until the Digital India Act takes shape and the Digital Personal Data Protection Act, 2023 is brought fully into force, AI remains governed only indirectly through the cyber, data, and liability provisions of the IT Act.
a. Algorithmic Bias and Manipulation: Under the Consumer Protection Act, 2019, consumers are protected against unfair trade practices, misleading advertisements, and defective services. The rise of AI-driven systems in e-commerce and marketing has introduced several challenges, most notably algorithmic bias and manipulative targeting of consumers.
b. E-commerce and Liability of AI Systems: AI-powered chatbots and recommendation engines are now key components of consumer interfaces. However, these tools can sometimes provide inaccurate information, misrepresent terms, or make unauthorized commitments. The Consumer Protection (E-Commerce) Rules, 2020 hold marketplaces and sellers jointly accountable for the products or services. If an AI system provides misleading information leading to damage, liability could be extended to the platform for negligence or failure to exercise due diligence (see Rule 5(2) – liability of marketplace e-commerce entities).
a. AI in Surveillance and Law Enforcement: Indian law enforcement agencies are increasingly using AI tools such as Facial Recognition Technology (FRT) (e.g., the Delhi Police’s use of FRT during protests and riots). However, this area lacks a dedicated legal framework or statutory oversight. It raises issues under Article 21 of the Constitution (Right to Privacy), affirmed in Justice K.S. Puttaswamy v. Union of India (2017), where the Supreme Court emphasized the need for legality, necessity, and proportionality in surveillance. AI-enabled surveillance without clear safeguards can amount to profiling and mass surveillance, violating privacy rights and potentially producing chilling effects on free expression (Article 19).
b. Deepfakes and AI-generated Misinformation: AI is being misused to create deepfakes (hyper-realistic fake videos and images), which are often used for cyberbullying and extortion, political misinformation, and reputation damage or defamation.
Legal Provisions: Section 66E of the IT Act: Punishes violation of privacy through capturing/publishing images of private areas without consent. Section 67 of the IT Act: Deals with the publishing or transmission of obscene material in electronic form. Indian Penal Code (IPC): Section 469 – Forgery for harming reputation, Section 500 – Defamation, Section 509 – Word, gesture, or act intended to insult modesty of a woman.
Gunjan Bhatter is a dedicated law student at IME Law College, affiliated with Chaudhary Charan Singh University. With a keen interest in legal research and global legal developments, she is also a proud member of the International Council of Jurists, London.