What is Shadow AI? Why is it considered a risky practice for your business, and what guidelines should you follow? Let’s delve into these questions with AI expert Thomas Gouritin.
As the use of generative AI tools becomes increasingly widespread, their adoption within businesses is also gaining ground, according to the findings of the 2nd edition of our BDM survey on AI. In 2025, 48% of our respondents (out of 1,034 digital professionals) report generative AI being deployed at work, up from 30% just a year ago. However, a growing trend known as Shadow AI is emerging. To learn more about its risks and the best practices to implement internally, we spoke with Thomas Gouritin, AI expert at Smart Tribune, chatbot designer, and conference speaker.
Thomas Gouritin, AI Expert at Smart Tribune
I have been working on conversational chatbot topics for about a decade, long before the advent of ChatGPT. From editorial work to tech, I engage with these topics on a daily basis. I joined Smart Tribune, where I assist major companies with AI and business knowledge management, especially on trust issues.
Shadow AI: A Dual Definition, Between Risk and Opportunity for Businesses
Our expert primarily highlights the negative aspect of Shadow AI, which is “the use of public AI tools by professionals in their daily jobs to assist them in their tasks, but not always in a secure manner for the company, and without the company’s knowledge of such use.” This last point is indeed problematic, although several reasons can explain this type of practice internally.
The main reason, which has been the key to ChatGPT’s success from the start, is its very natural, easy-to-use interface that requires no coding skills. This ease of use creates the illusion that everything is just a sentence (or a prompt) away, which is not always the case.
Many professions also involve time-consuming, unmotivating tasks. Employees then look for shortcuts to complete them faster and spend as little time on them as possible.
Large companies have taken this issue seriously, implementing programs to ensure proper use of generative AI according to business needs, as observed by Thomas Gouritin in the field. The expert recommends that companies “truly cultivate an AI culture,” rather than practice “disguised coercion.” He adds: “It’s not just about imposing bans on certain tools, but rather about guiding employees on this path.” In any case, the expert advises, companies should assume that employees will inevitably use generative AI if it saves them time.
Give them the means to understand how it works and what it can do, rather than foolishly banning it, since we know very well that these are tools that will be used because they provide many services. The whole challenge is to find the right balance to seize opportunities while being cautious about how it’s used.
The Risks Associated with Shadow AI Due to Lack of Corporate Governance
While it might be tempting to use tools like ChatGPT, Claude, Gemini, or Perplexity for those tedious, low-value tasks, the way professionals use them can expose their companies to serious risks. Without wanting to create widespread panic, Thomas Gouritin reminds us of the main threats to IT system security and data confidentiality.
Cybersecurity Risks of Shadow AI
When data is shared with these tools through a conversational interface, it’s challenging to track where the data ends up and how it will be used. “We have seen numerous cases in the media where proprietary, confidential data was reused to train the language models that power generative AI tools.”
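One common mitigation, before any prompt leaves the company network, is to redact obviously sensitive substrings. The sketch below is illustrative only: the patterns (email, IBAN, an API-key-like token) and the `redact` helper are hypothetical examples, not the method described by the expert, and a real deployment would rely on the company's own data-classification and DLP rules.

```python
import re

# Hypothetical redaction patterns -- a real deployment would use the
# company's own data-classification rules (DLP tooling, named-entity
# detection, etc.). These regexes are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before a prompt
    leaves the company network for a public AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com, key sk-abcdef1234567890XYZ"))
```

Such a filter does not remove the risk entirely (free-text trade secrets slip through regexes), but it makes the "where does this data end up?" question less costly when it does arise.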
Another risk not to be underestimated: tools that seem useful and interesting for boosting professional productivity can also be perfect Trojan horses designed for malicious purposes.
Today, building an interface that resembles ChatGPT to harvest data is quite simple in terms of coding. With AI, it’s become easier to design interfaces that inspire trust and are commonly used by everyone.
Data Confidentiality Risks of Shadow AI
The AI expert highlights the tools known as agentive AI, which are increasingly being developed in the market. He describes the situation: “The promise of some tools is to offer pre-made agent templates to perform complex multi-step tasks on your behalf. One of these templates might enable the tool to review the latest document titled ‘strategy doc.pdf’ in your Google Drive, ask it to summarize the content into 10 bullet points, and then share the results in your Slack. I’m not sure your executive committee would appreciate this.”
Another example shared by the expert: “In an IT company, developers were debugging their code directly in ChatGPT, entering proprietary code related to client projects.”
These tools have a lot of potential, but their operation raises significant data confidentiality issues, warns Thomas Gouritin.
Risks of Shadow AI Related to Tool Hallucinations
If you ask ChatGPT for the definition of a hallucination, in the context of a generative AI tool, it will reply: “[It] refers to a response produced by the model that is false, made-up, or disconnected from real data, yet is presented in a believable manner. The AI ‘hallucinates’ when it generates incorrect or unverifiable information, while presenting it as factual or plausible.”
Many professionals have experienced this kind of hallucination: a calculation that returns an incorrect result, or data extracted from a (non-confidential) PDF that does not match the right column of the document, for instance. Another case highlighted in the media caught Thomas Gouritin’s attention: “lawyers, during a court session in the United States, were rebuked by judges because they had cited legal precedents generated by ChatGPT that didn’t even exist.”
Can you imagine such a scenario in a business environment?
Best Practices Needed in Business to Combat Shadow AI
1. Use Generative AI Tools Validated by the IT Department
While it may seem obvious not to copy and paste sensitive, confidential data into an AI chatbot, the first recommendation is to use solutions developed internally or by a trusted publisher and pre-approved by your organization’s IT department. This ensures the confidentiality of the data shared with the generative AI tool and helps guarantee the quality of the responses generated.
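In practice, this recommendation can be enforced technically, for example by routing every AI request through a check against an allowlist published by the IT department. The sketch below is a minimal illustration under that assumption: the endpoint URLs and the `send_prompt` helper are hypothetical, not a real product's API.

```python
# Minimal sketch of an "approved tools" check, assuming the IT
# department publishes an allowlist of vetted AI endpoints.
# The endpoint names below are hypothetical.
APPROVED_ENDPOINTS = {
    "https://ai.internal.example.com/v1/chat",   # internal assistant
    "https://vendor.example.com/api/approved",   # vetted publisher
}

def is_approved(endpoint: str) -> bool:
    """Return True only if the target endpoint was vetted by IT."""
    return endpoint in APPROVED_ENDPOINTS

def send_prompt(endpoint: str, prompt: str) -> str:
    """Refuse to forward prompts to endpoints IT has not approved."""
    if not is_approved(endpoint):
        raise PermissionError(f"Endpoint not approved by IT: {endpoint}")
    # ...forward the prompt to the approved service here...
    return f"sent to {endpoint}"
```

The design choice here matters: a blocked request fails loudly rather than silently, which gives employees a clear signal and gives IT a concrete list of the tools people actually want to use.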
I joined Smart Tribune a year ago with a deep conviction that, to fully leverage generative AI, one must have a source of truth and a knowledge base that are controlled in terms of access, security, and data quality.
2. Identify and Disseminate Good Practices Internally
Good practices already used internally by your employees are a real goldmine that companies do not sufficiently exploit, according to the expert. His advice: “this knowledge should be identified and then disseminated in the right place, to the right people, and in a secure framework, to avoid hallucinations and errors.”
By “sourcing” current use cases in this way, regardless of your company’s size, you can build on good practices that are often simple to implement, make more sense, and have more impact, thereby concretely meeting the needs of employees who may have been using generative AI tools on their own.
3. Develop an AI Charter in Collaboration with Employees
What about the AI charter, which is becoming increasingly prevalent in companies? While this has become a standard procedure for many organizations, Thomas Gouritin believes the way it is developed could be greatly improved.
I really enjoy organizing and leading ‘choose your own adventure’ workshops, where employees go and test things for themselves. What’s most interesting is to challenge them and deliberately make them go too far, by putting them in competition and asking them to speed up so they don’t realize the limits they might be crossing, by not verifying their sources, for example…
This exercise, which relies on action and making mistakes, allows for more concrete discussions with them about what happened, the limits that can be crossed or not, and especially explaining why.
The goal: to draft this AI charter in close collaboration with your team members, so that good practices truly permeate their minds, “and it’s not just a document they sign quickly to get rid of because it wouldn’t mean much to them.”
The approach of placing employees at the heart of practices shows that what they do on a daily basis can be an example to share with their colleagues, and even beyond their own department.
Through the implementation of workshops or the organization of internal modules, you can more easily bring up the best use cases, staying as close as possible to the field, and thus disseminate them throughout your company.
This is the “positive” challenge of Shadow AI, with the great opportunities these tools can offer in a professional setting, provided their use is known, regulated, and supported, concludes Thomas Gouritin.