The Security Dilemma of AI-Powered App Development

As enterprises rush to adopt generative AI (GenAI) in application development, security and governance concerns are emerging as critical barriers, according to OutSystems’ 2025 State of Application Development report. With 88% of IT professionals exploring AI-powered applications, the security risks associated with AI-generated code are raising red flags across the industry.

GenAI: A Security Risk in Disguise?

While GenAI enables faster software development, its potential vulnerabilities pose a significant threat to IT environments. The OutSystems report found that 62% of IT professionals cite security and governance as top concerns when using GenAI for coding. Research further suggests that AI-generated code is accurate only 65% of the time, with some tools producing correct code just 31% of the time, leaving organizations exposed to exploits, bugs, and compliance risks.

Paulo Rosado, CEO and Founder of OutSystems, cautions that unchecked AI-generated code could introduce critical vulnerabilities. “AI-written code can lead to considerable security and governance issues if left unchecked,” he warns, urging IT leaders to implement robust guardrails to mitigate risks.

The Governance Challenge

Beyond security flaws, governance remains a major hurdle. Half of IT professionals report significant challenges integrating AI into existing development workflows, further complicating oversight and compliance efforts. The complexity of monitoring AI-generated code—especially in high-stakes industries like finance, healthcare, and government—underscores the need for stricter controls and validation mechanisms.
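
In practice, such a validation mechanism can start as simply as an automated gate that inspects AI-generated code before it ever reaches a human reviewer. The sketch below is a hypothetical illustration, not anything described in the OutSystems report: it uses Python’s standard ast module to flag constructs that commonly warrant review, and the audit_generated_code function, RISKY_CALLS, and RISKY_MODULES names are invented for this example.

```python
# Minimal sketch of an automated guardrail for AI-generated Python code.
# All names and lists here are illustrative, not from any vendor product.
import ast

# Builtins that commonly indicate injection or code-execution risks.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
# Modules whose use in generated code should trigger human review.
RISKY_MODULES = {"pickle", "subprocess", "os"}

def audit_generated_code(source: str) -> list[str]:
    """Return a list of review findings for one AI-generated source file."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        # Reject output that does not even parse.
        return [f"does not parse: {exc}"]

    findings = []
    for node in ast.walk(tree):
        # Flag direct calls to eval/exec-style builtins.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Flag imports of modules on the review list.
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            else:
                names = [node.module or ""]
            for name in names:
                if name.split(".")[0] in RISKY_MODULES:
                    findings.append(f"line {node.lineno}: imports {name}")
    return findings

if __name__ == "__main__":
    snippet = "import pickle\nresult = eval(user_input)\n"
    for finding in audit_generated_code(snippet):
        print("REVIEW:", finding)
```

A check like this is no substitute for code review or a full static-analysis tool, but it shows how cheaply a first line of defence can be bolted onto a pipeline that accepts AI-generated code.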

The report also highlights a growing concern over regulatory compliance. As AI-generated applications proliferate, companies must navigate evolving data protection laws, cybersecurity frameworks, and ethical AI guidelines. Without a clear governance model, organizations risk legal repercussions and reputational damage.

Low-Code: A Safer Path Forward?

To mitigate these risks, IT leaders are increasingly turning to low-code platforms that integrate AI while maintaining security and compliance. OutSystems’ study found that organizations using low-code development report improved governance, with 62% able to predict budgets consistently, compared with just 52% of those relying on traditional coding methods. Additionally, low-code platforms offer built-in security features that help enforce best practices and regulatory standards.

OutSystems has taken a proactive approach with Mentor, its AI-driven development assistant designed to embed security and governance at every stage of the software development lifecycle. By combining GenAI with low-code and AI-powered monitoring, Mentor enables IT teams to generate applications quickly, without compromising security.

Striking a Balance Between Speed and Security

As enterprises race to build more applications in less time, the tension between speed and security will continue to define the future of AI-driven development. While GenAI presents a revolutionary opportunity to accelerate software production, organizations must prioritize robust security frameworks, ongoing audits, and AI code validation processes to prevent vulnerabilities from slipping through the cracks.
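
What an AI code validation process looks like will vary by organisation, but a common pattern is a merge gate that runs automated checks before a change is eligible for human review. The following sketch is a generic illustration rather than OutSystems guidance: it assumes a Python project with a pytest test suite and the open-source Bandit scanner installed, and the src/ path is a placeholder.

```python
# Minimal sketch of a pre-merge gate for AI-assisted changes: run the test
# suite and a static security scanner, and block the merge if either fails.
# Tool choices (pytest, bandit) and the src/ path are assumptions made
# for this example; substitute whatever your project actually uses.
import subprocess
import sys

CHECKS = [
    ["pytest", "--quiet"],     # existing tests must still pass
    ["bandit", "-r", "src/"],  # static security scan of the source tree
]

def run_gate() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed on: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("all checks passed; change is eligible for human review")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```

Wiring a script like this into a CI pipeline makes the audit continuous: every AI-assisted change faces the same checks, and failures are caught before they reach production rather than after.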

For IT leaders, the challenge is clear: harness the power of AI while ensuring that security remains a non-negotiable priority. By adopting AI-powered low-code solutions with strong governance controls, businesses can innovate with confidence—without exposing themselves to unnecessary risks.