A recent study finds that a third of developers report more than 60% of their code is AI-generated. While AI speeds up software development, it also opens a new era of vulnerabilities.
The rapid acceleration of software development is transforming security practices. An international study conducted from 2023 to 2025 with over 4,500 tech professionals – developers, application security (AppSec) managers, and chief information security officers (CISOs) – presents a troubling finding. Despite heightened risk awareness, companies continue to knowingly release vulnerable code.
The survey, conducted by Censuswide across nine countries and published by Checkmarx, highlights a widening gap between the adoption of new technologies (generative AI, cloud architectures, distributed software chains) and organizations’ actual ability to secure their applications.
The report identifies several key trends: the rise of Shadow AI, the normalization of the “patch later” approach (fixing after deployment rather than securing beforehand), the still limited use of available security tools, and a persistent divide between development and security teams. More broadly, it questions how AI, currently a code generator, could become a key player in application defense tomorrow.
AI: The new driver of development and risk
The Censuswide study reveals how integral artificial intelligence has already become to the software development cycle. A third of the developers surveyed acknowledge that over 60% of their code is AI-generated, with that figure sometimes exceeding 80%. AI has thus moved from occasional helper to significant driver of production.
The issue, researchers point out, is that this rapid adoption has occurred without robust safeguards. Less than one in five companies has established an official list of authorized tools with clear usage guidelines. Consequently, many developers turn to unvalidated assistants, a phenomenon known as Shadow AI.
This lack of governance creates a blind spot. AI tools may produce functional code, but it’s often riddled with security flaws that aren’t immediately apparent. Without clear traceability of code origins, security teams lose visibility, and vulnerabilities accumulate in projects, sometimes on an industrial scale due to the speed of generative tools.
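The report doesn't specify what an "official list of authorized tools" should look like, but as a purely illustrative sketch, such a policy can be as lightweight as a versioned file in the repository that CI or onboarding tooling checks against. The file name, tool names, and fields below are hypothetical, not drawn from the study:

```yaml
# ai-tool-policy.yml — hypothetical allowlist of sanctioned AI assistants
# (illustrative only; adapt names, versions, and rules to your organization)
version: 1
approved_tools:
  - name: example-code-assistant        # placeholder tool name
    min_version: "2.0"
    allowed_scopes: [autocomplete, refactoring]
    data_policy: no-code-exfiltration   # vendor must not retain submitted code
usage_rules:
  - ai_generated_code_must_be_reviewed: true
  - tag_commits_with: "ai-assisted"     # preserves traceability of code origin
  - prohibited_in: [cryptography, auth] # areas requiring manual implementation
```

The point of such a file is less enforcement than visibility: it gives security teams the code-origin traceability the study says they currently lack.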
“We’ll deliver anyway”: The culture of patching later
One of the most striking findings of the study is that 81% of organizations admit to deliberately releasing vulnerable code into production. In other words, many security flaws are not accidents but conscious compromises.
Business pressures largely explain this trend. Teams are required to deliver faster, meet demands for new features, and hit tight deadlines. In this environment, security becomes an adjustable parameter, and many adopt a patch-later logic: launch the product first, fix flaws afterward. The problem is that attackers don't wait. The automation tools cybercriminals now use let them detect and exploit some vulnerabilities almost immediately after deployment.
Another statistic underscores this shift: nearly a third of developers have hoped that vulnerabilities would go unnoticed once the code is in production. For researchers, this is no longer just a strategy dictated by speed but a sign of a culture of resignation where security is secondary to the urgency of delivery.
The consequences are numerous: exploited vulnerabilities increase, the costs of post-incident corrections soar, and, most importantly, users lose trust. Ultimately, tech teams find themselves caught in a perpetual dilemma, the study notes. On one hand, there’s the pressure to stay competitive by delivering quickly. On the other, a technical and security debt that invariably catches up with the company.
Security tools exist but remain underused
Despite the availability of mature tools, the study shows that companies still struggle to integrate them effectively into their practices. Less than half of the organizations actually use proven technologies like Dynamic Application Security Testing (DAST), Infrastructure-as-Code scanning, or API security testing. Many already have the necessary “weapons” but fail to fully utilize them, researchers note.
This gap creates a false sense of security. The solutions are there, but they are not deployed consistently nor integrated into the environments where developers work daily. This lack of integration also leads to fragmented practices. Each tool issues its alerts independently, without providing a unified view of risks. Teams are then overwhelmed by notifications, unable to prioritize what needs to be corrected urgently. Security, therefore, becomes a source of operational fatigue rather than assistance, presenting a paradox where mature technologies are available, but their adoption and orchestration are lacking.
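The report doesn't prescribe specific tooling, but one concrete way to close the integration gap it describes is to run scans automatically where developers already work, for instance on every pull request, rather than as occasional audits. The snippet below is a minimal, hypothetical GitHub Actions workflow using the open-source Checkov scanner for Infrastructure-as-Code; the `infra/` path and pinned versions are assumptions to adapt, and any other IaC scanner could fill the same role:

```yaml
# .github/workflows/iac-scan.yml — hypothetical example, not from the report
name: iac-scan
on: [pull_request]           # scan runs inside the developer workflow, not after release
jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bridgecrewio/checkov-action@v12
        with:
          directory: infra/  # assumed location of Terraform/IaC files
```

Wiring the scan into the pull request itself addresses the "patch later" pattern at its source: a flaw is flagged before merge, when fixing it is cheapest, instead of after deployment.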
Ongoing divide between developers and security
The report also highlights a lasting tension between developers, tasked with delivering quickly, and security teams, responsible for application reliability. On paper, developers are taking on more responsibility by fixing more vulnerabilities than in previous years. But in practice, collaboration remains uneven.
The figures show it clearly: nearly one-fifth of CISOs believe that developers fix almost all flaws, yet only 7% of AppSec managers share this view. This perception gap reveals a real visibility issue. Senior management sees fewer critical incidents, but field teams observe an accumulation of secondary flaws, eventually creating a significant technical debt.
For developers, the situation isn’t much simpler. As mentioned, the pressure of deadlines sometimes pushes them to deliver imperfect code, hoping some vulnerabilities will go unnoticed. This misalignment results in communication issues and poorly defined priorities. Without common dashboards and shared tools, different parties work with different sets of data. The study finds that leaders overestimate the level of security achieved, while developers and AppSec managers deal with a more chaotic reality.
Towards AI-driven application security
While artificial intelligence is currently seen as a risk factor by generating hard-to-control code, it could also become one of the best tools for enhancing software security. The report emphasizes that the production speed induced by AI surpasses human control capabilities. Manual code reviews are no longer sufficient to keep up with the pace.
A new phase is emerging, with AI as a defender. Already, half of the organizations use AI-based security assistants, and 40% plan to invest in them in the coming years. These tools don’t replace developers but can analyze code in real-time, suggest corrections, or even train teams directly in their working environment.
The study also finds that professionals are ready to delegate certain critical tasks to AI: providing security recommendations (39%), suggesting fixes that developers then approve (37%), or offering training sessions when vulnerabilities are detected (39%). According to the study, this delegation would streamline security processes.
But for this promise to become a reality, a clear framework is still needed. As long as the use of AI assistants remains ungoverned, the risks created on the development side cannot be offset, the study concludes. The real challenge, therefore, is to strictly regulate generative AI practices while simultaneously deploying automated agents capable of securing the production chain as fast as the machines feeding it.

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.