The widespread adoption of generative AI has led to an increase in AI-generated content. But can one claim authorship of a text where ChatGPT is the real creator?
Many common digital practices remain legally ambiguous for those who are not experts. Between well-established uses, gray areas of the law, and recent regulatory changes, it is challenging to determine precisely what is allowed, tolerated, or prohibited. To clarify, BDM has published a series of articles exploring the legality of these practices, with insights from attorneys specializing in digital law and intellectual property.
In this article, we explore the legality of publishing a text or article written by or with the assistance of ChatGPT but signed by a human author. To what extent is this practice legal? To answer this question, BDM consulted attorneys Alan Walter of Walter Billet Avocats and Jérôme Giusti of Metalaw.
ChatGPT, the Web’s Most Prolific Writer
With the rise of generative AI, content creation has become faster and more accessible. ChatGPT can produce a structured text in seconds, ready to be published or lightly reworked. Most media outlets, whether they acknowledge it or not, now use generative AI somewhere in their content pipeline: for information gathering, writing and reformulating text, or proofreading. The extent of that use varies. Some writers and journalists use AI marginally to refine their content, while others delegate the writing process to it almost entirely.
In practice, these contents are often published under the name of a human author, whether an employee or an expert. This attribution aims to lend credibility to the text and align it with traditional publication logic.
Beyond the moral aspect, this approach raises a legal question: can one truly declare themselves the author of a text generated by AI?
ChatGPT Cannot Be Credited as an Author
In French law, only a natural person can create a work. Article L121-1 of the Intellectual Property Code states that “the author enjoys the right to respect for their name, quality, and work”. Therefore, “an artificial intelligence cannot be recognized as the author of a work and cannot have its written productions protected by copyright”, explains Alan Walter.
When content is generated by ChatGPT, it does not inherently have a clear legal status. By default, it can thus be published anonymously. However, a human using it can claim authorship, provided they make a significant creative contribution: rewriting, selecting elements, structuring, or enriching. “A text purely generated by AI is not protected unless the human author contributes significantly to the rewriting of the text”, confirms Jérôme Giusti.
Since an AI cannot hold copyright in its creations, the human who signs the article and uses AI to generate its content may be considered the author, provided they make a sufficient creative contribution, adds Alan Walter.
However, AI companies may include their own conditions. “If the company that designed the AI reserves copyright on the AI’s creations, then reproducing an article created by it will be sanctioned as if it were a human article”, warns Alan Walter. Jérôme Giusti also reminds that “some journalistic ethical charters recommend” disclosing the use of AI when publishing a text.
AI-Written Text: The Author Assumes Responsibility
In concrete terms, signing a text generated by ChatGPT carries no specific risk in terms of intellectual property. But claiming authorship of a work also means assuming responsibility for it. “In the event of erroneous or defamatory content, the legal responsibility lies with the publication director and, failing that, the journalist, not the AI tool”, indicates Jérôme Giusti. In short, the same rules apply as for content written by a human.
Being considered the author means that the human who appropriates content created by AI will be held responsible if that content is inaccurate, illegal, misleading, or defamatory. Consequently, if the AI plagiarizes a work protected by copyright and a human signs the article, that person will themselves be held responsible for the plagiarism, adds Alan Walter.
Careful verification of information remains crucial, especially given ChatGPT’s difficulty in correcting its hallucinations…
Jérôme Giusti, Attorney specializing in innovation and digital law
Jérôme Giusti is a lawyer at the Paris bar, specializing in intellectual property and digital law. Committed to responsible innovation, he also co-directs the Jean Jaurès Foundation’s Justice Observatory.
Alan Walter, Attorney
An attorney at the Paris bar since 2006, Alan Walter practiced at several firms specializing in new technologies, such as Alain Bensoussan Avocats and Kahn & Associates, before co-founding his own firm in 2015. He also teaches at Télécom Paris and the University of Paris-Nanterre.