Shadow AI & Cybersecurity: “Regulating AI Use Without Stifling Innovation”

February 9, 2026

The informal adoption of AI in the business sector is on the rise. While it drives productivity gains, it also introduces a range of IT security risks.

Generative AI technologies have seen a dramatic increase in corporate adoption over the last three years. However, this adoption has often lacked coordination within companies. Tools such as ChatGPT or Claude have frequently been deployed at the initiative of employees themselves, with the implicit approval of management who are pleased with the observed productivity gains.

BlackFog, a cybersecurity firm, has recently examined this trend in a study published on January 27, 2026, which surveyed 2,000 individuals. Below are the key statistics from this study.

Shadow AI in 2026: 5 Essential Statistics

Dr. Darren Williams, CEO and founder of BlackFog, called the study’s findings alarming: they reveal not only the widespread use of unauthorized AI tools but also how much risk employees and executives are willing to tolerate. He believes the findings should “alert security teams and underscore the need for increased surveillance and visibility over these security loopholes.” While 86% of the surveyed employees report using AI tools weekly at work, the study suggests that their use introduces several blind spots.

Here are five key statistics from the survey:

  1. About half of the respondents use AI tools not approved by their employer.
  2. 60% of respondents believe that “the speed gain justifies the security risk.” Similarly, 69% of executives prioritize speed over security.
  3. 27% admit to having shared data about coworkers, and 33% to having shared research data or datasets.
  4. 58% of professionals rely on free AI tools that lack enterprise-level security features.
  5. 51% have integrated AI tools into other systems without IT department approval.

Two Major Risks: Data Breaches and System Compromise

To address these issues, BDM consulted Alexandre Lohisse, information systems security coordinator for Hellowork (BDM’s parent company). According to the expert, the informal use of AI tools by employees can pose serious security risks. “An employee using unmanaged AI systems poses a risk to their company,” he notes. For him, the primary danger is the risk of data breaches.

This data can be reused to train the model, and might then be exploited by malicious actors (foreign states, rival companies, etc.) for espionage, competitive advantage, or information resale. […] Its disclosure could have significant legal (GDPR compliance), economic, or reputational consequences.

Another risk is the “compromise of all or part of the company’s information system” through the use of free AI models that may be “contaminated with data poisoning, altered, or outright malicious,” adds Alexandre Lohisse. The information system could then face the introduction of malicious source code, the use of scripts generated by a malicious model, or uncontrolled task automations.

How to Ensure Secure Use of AI?

In response to the rise of shadow AI, Alexandre Lohisse emphasizes the need to frame AI usage within a formalized governance framework that organizes AI use without stifling innovation. “Shadow AI is essentially bypassing this governance framework,” he reminds us.

Companies have several tools at their disposal. On the technical side, some may choose to restrict or channel AI usage to reduce risks associated with data exposure and the use of uncontrolled tools. This approach is effective in terms of security but requires careful consideration and trade-offs.

The right approach to address these issues? It depends on where you set the balance between security and operational flexibility. A technical measure could be to only allow information flows from company-approved AI sources, which could create friction for business operations.
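To make the trade-off concrete, here is a minimal sketch of such an egress allowlist in Python. All hostnames are hypothetical placeholders; in practice this kind of policy would typically be enforced at the proxy or firewall level rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical list of company-approved AI endpoints (placeholders,
# not real services).
APPROVED_AI_HOSTS = {
    "ai.internal.example.com",      # self-hosted model gateway
    "api.approved-vendor.example",  # vetted enterprise offering
}

def is_allowed(url: str) -> bool:
    """Return True only if the request targets an approved AI host."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

print(is_allowed("https://ai.internal.example.com/v1/chat"))  # True
print(is_allowed("https://free-ai-tool.example/v1/chat"))     # False
```

The friction mentioned above shows up exactly here: any tool outside the allowlist is blocked until it has been vetted and added, which is the deliberate cost of this measure.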

Organizational levers can also be utilized: regulating usage through charters and cybersecurity policies, providing employees with authorized and suitable tools, and most importantly, raising awareness. For the expert, this support is crucial given the rapid evolution and recent development of these technologies.
