
AI Just Made Your Security Stack 10 Years Older Overnight

Control Core · May 11, 2026 · 5 min read · controlcore.io

The rules of digital warfare changed in 2025. By 2026, the house wasn't just on fire — attackers had learned to use your own AI to fan the flames. Here's what's coming next, and why your organization needs to act now.

A line was crossed. And most organizations missed it.

For years, the cybersecurity conversation centered on patching vulnerabilities, training employees to spot phishing emails, and hardening perimeters. It was slow work — but it was manageable. Attackers moved at human speed.

That era is over.

In May 2026, Google's Threat Intelligence Group (GTIG) published findings that should stop every technology executive in their tracks. For the first time on record, GTIG confirmed that a threat actor used an AI-generated exploit to discover and weaponize a zero-day vulnerability (a 2FA bypass in a widely used web administration tool) and was actively planning a mass exploitation event.

Let that sink in. A bad actor prompted an AI, got an exploit, and was days away from hitting thousands of organizations simultaneously. No team of expert hackers. No expensive zero-day marketplace. Just a model and a browser.

"We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability." — Google Threat Intelligence Group, May 2026¹

The industrialization of cyberattacks

GTIG's 2026 report describes something they call a "maturing transition" — from early experimentation with AI to what they term the industrial-scale application of generative models within adversarial workflows.¹ This isn't hyperbole. The numbers tell a vivid story.

  • 85,000+ vulnerability cases: State-sponsored actors built a specialized AI training dataset from a decade of real-world exploits, using it to steer models toward attack-ready code analysis¹

  • Thousands of daily prompts: APT45 (North Korea) sent thousands of repetitive, automated prompts to recursively analyze CVEs and validate proof-of-concept exploits, at a scale no human team could match¹

  • 4 new AI-enabled malware families: PROMPTFLUX, HONESTCUE, CANFAIL, and LONGSTREAM all leverage LLMs for real-time obfuscation and defense evasion, rewriting themselves to stay invisible to traditional scanners¹

  • 4+ major supply chains compromised: LiteLLM, BerriAI, Trivy, and Checkmarx were each targeted to steal AI API secrets and pivot into enterprise infrastructure¹

State-sponsored groups are integrating specialized vulnerability databases directly into AI workflows to fine-tune models for exploit discovery. Malicious actors are now using LLMs to generate polymorphic malware that rewrites itself to evade detection. And a criminal middleware ecosystem has emerged — built specifically to pool stolen AI API keys, bypass usage limits, and run attack operations at cloud scale, anonymously.¹

One malware family warrants special attention: PROMPTSPY, an Android backdoor that uses Google's Gemini API to autonomously navigate a device's interface, capture biometric authentication patterns, and maintain persistence — all without any human operator in the loop.¹ Attackers are no longer just writing malware. They're deploying autonomous agents. The human has left the kill chain.

Your API is now the front door — and it's wide open

Here's the reality most executive teams are still not grappling with: the attack surface has fundamentally relocated. It's no longer just your network perimeter or your endpoints. It's every API call your AI tools make, every LLM integration your developers shipped last quarter, and every third-party library your orchestration layers depend on.

GTIG documented how cybercriminal group "TeamPCP" compromised popular GitHub repositories — including LiteLLM, a widely used AI gateway library — to embed a credential stealer called SANDCLOCK directly into build environments. Any organization running affected packages quietly had their AWS keys, GitHub tokens, and AI API secrets exfiltrated. The stolen credentials were subsequently sold to ransomware groups.¹
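
What does that exposure look like at build time? Here is a minimal sketch, assuming only the publicly documented prefixes for a few common credential types (real secret scanners such as gitleaks or trufflehog use far richer rule sets, and the function name is ours). It shows the kind of audit that reveals what a compromised install hook could read:

```python
import os
import re

# Publicly documented prefixes for a few common credential types.
# Illustrative only; real secret scanners use far richer rule sets.
PATTERNS = {
    "OpenAI API key":    re.compile(r"^sk-[A-Za-z0-9_-]{20,}$"),
    "Google API key":    re.compile(r"^AIza[0-9A-Za-z_-]{35}$"),
    "GitHub token":      re.compile(r"^gh[pousr]_[A-Za-z0-9]{36,}$"),
    "AWS access key ID": re.compile(r"^AKIA[0-9A-Z]{16}$"),
}

def audit_environment() -> list[str]:
    """Flag environment variables whose values look like live credentials.

    Anything flagged here is readable by every child process a build
    spawns, including a compromised dependency's install hooks.
    """
    findings = []
    for name, value in os.environ.items():
        for label, pattern in PATTERNS.items():
            if pattern.match(value):
                findings.append(f"{name} looks like a {label}")
    return findings

if __name__ == "__main__":
    for finding in audit_environment():
        print("[exposed]", finding)
```

Every hit from a scan like this is a secret that SANDCLOCK-style malware in a build dependency would have exfiltrated silently.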

And it gets more specific. GTIG also surfaced a thriving underground economy of what they call "obfuscated LLM access" — purpose-built middleware that aggregates stolen API keys from Anthropic, OpenAI, and Google across hundreds of automated throwaway accounts, enabling attackers to run large-scale operations at shared cost while evading bans.¹ They found a tool called "Claude-Relay-Service" explicitly designed for this purpose — pooling AI accounts to run adversarial workflows with anonymity and scale.
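
For defenders, one implication is that key-usage telemetry becomes a detection surface. The sketch below is a generic illustration of a pooling heuristic; the threshold, the log shape, and the function name are our assumptions, not anything taken from the GTIG report:

```python
from collections import defaultdict

# One detection heuristic implied above: a single API key suddenly used
# from many distinct client addresses is a signal of key pooling.
MAX_DISTINCT_SOURCES = 5  # illustrative threshold

def flag_pooled_keys(request_log: list[tuple[str, str]]) -> set[str]:
    """request_log holds (api_key_id, source_ip) pairs from one time window."""
    sources: dict[str, set[str]] = defaultdict(set)
    for key_id, source_ip in request_log:
        sources[key_id].add(source_ip)
    return {k for k, ips in sources.items() if len(ips) > MAX_DISTINCT_SOURCES}

# key-a is hit from 12 addresses in the window; key-b from one.
log = [("key-a", f"203.0.113.{i}") for i in range(12)] + [("key-b", "198.51.100.7")]
print(flag_pooled_keys(log))  # -> {'key-a'}
```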

The implication is stark: if your AI tools are accessible and over-permissioned, they will be found, probed, and weaponized — whether by an external attacker or a compromised dependency quietly running in your pipeline.

Three predictions every enterprise leader should write down

By 2027, AI-generated zero-days will be commodity weapons. The GTIG report documents the first confirmed case, but the underlying capability is only improving. Frontier models are getting better at contextual code reasoning and at identifying the kind of high-level semantic logic flaws that traditional static analyzers completely miss.¹ The zero-day market will fragment: nation-state-quality exploits will land in the hands of motivated criminal groups armed with nothing more than a premium API subscription.

Agentic AI systems will become the primary attack surface within 18–24 months. GTIG already documents sophisticated actors deploying multi-agent penetration frameworks — systems like Hexstrike and Strix that autonomously pivot between targets using AI memory graphs to track attack state.¹ The agentic workflows your teams are building now to automate procurement, HR, and finance will be the highest-value targets in your organization. They have permissions. They have context. And most have no governance layer.
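
To make that governance gap concrete, here is a minimal sketch of the deny-by-default check most agent deployments skip today. The AgentToolGate class and the (tool, action) policy shape are hypothetical illustrations, not any specific framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentToolGate:
    """Deny-by-default gate for an agent's tool calls (hypothetical API).

    The agent may only perform (tool, action) pairs that were explicitly
    granted; everything else raises instead of silently executing.
    """
    allowed: set[tuple[str, str]] = field(default_factory=set)

    def permit(self, tool: str, action: str) -> None:
        self.allowed.add((tool, action))

    def check(self, tool: str, action: str) -> None:
        if (tool, action) not in self.allowed:
            raise PermissionError(f"agent may not {action!r} via {tool!r}")

# A procurement agent may read invoices but never issue payments.
gate = AgentToolGate()
gate.permit("erp", "read_invoice")

gate.check("erp", "read_invoice")  # allowed, returns quietly
try:
    gate.check("erp", "create_payment")
except PermissionError as exc:
    print("blocked:", exc)
```

An agent without a gate like this has exactly the properties an attacker wants: permissions, context, and no referee.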

Regulatory mandates for AI access control will arrive faster than most organizations are ready for. The EU AI Act, Canada's evolving AIDA framework, and emerging SEC disclosure requirements for AI-related breaches are all converging on one requirement: accountability for what your AI systems can access and do. Organizations that build governance infrastructure now will have a meaningful compliance lead — and a stronger security posture — when the deadlines hit.

The permission gap has a name — and a fix

Everything above points to one inescapable conclusion: the problem isn't just that attackers are faster. It's that most organizations have never had a consistent, enforceable way to govern what their digital tools — AI or otherwise — are actually allowed to do. Rules have lived in documentation, in tribal knowledge, in the good intentions of developers shipping fast. That is not governance. That is hope.

That gap is exactly what Control Core was built to close.

Control Core is a Policy-Based Access Control (PBAC) platform designed from the ground up for the AI-native enterprise — and equally capable of governing a 20-year-old database stack sitting next to your newest LLM integration. Think of it as the permissions layer that should always have existed, finally built for the world that actually exists in 2026.

What makes Control Core different from legacy PBAC solutions is that it was architected specifically to manage AI token flows, apply semantic guardrails in real time, and govern transactions between digital tools that didn't exist five years ago. It works across your entire stack — from your newest GPT-4o agent to the Oracle database your team has been running since 2004.

Critically, Control Core is not a man-in-the-middle solution. Architects are right to be cautious of intermediary approaches that introduce latency, single points of failure, or opaque data handling. Control Core operates as an interceptor and enforcer — a digital permissions bouncer that sits at the transaction boundary and applies your rules, without becoming a liability in your architecture. It enforces what applications cannot otherwise do without expensive custom development, at a fraction of the cost and effort.
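
In pattern terms (and only in pattern terms; none of the names below are Control Core's actual API), that enforcement point is a deny-by-default decision function consulted before any transaction handler runs:

```python
from typing import Callable

# Deny-by-default enforcement at the transaction boundary. All names here
# are illustrative; this shows the pattern, not a vendor's real interface.
Request = dict
Handler = Callable[[Request], dict]

def enforce(policy: Callable[[Request], bool], handler: Handler) -> Handler:
    """Wrap a handler so no transaction reaches it without policy approval."""
    def guarded(request: Request) -> dict:
        if not policy(request):
            return {"status": 403, "reason": "denied by policy"}
        return handler(request)
    return guarded

# Example rule: finance-classified data may only flow to an approved endpoint.
def finance_policy(request: Request) -> bool:
    if request.get("data_class") != "finance":
        return True
    return request.get("destination") == "models.internal.example"

handler = enforce(finance_policy, lambda request: {"status": 200})
print(handler({"data_class": "finance", "destination": "api.unknown.example"}))
# -> {'status': 403, 'reason': 'denied by policy'}
```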

Here's what that looks like in practice:

  • Access control: Fine-grained permissions on every API, application, and data source — eliminating the over-permissioning that turns AI tools into liabilities overnight.

  • Real-time guardrails: Context-aware decision rules for regulatory compliance, security policies, and business logic — enforced at the transaction level, before sensitive data moves.

  • Cost optimization: AI token usage and API spend governed by policy, not by whoever happened to write the integration. Governance is also financial discipline.

  • Organizational sovereignty: You own your rules, your policies, and your data. No vendor lock-in, no dependency on a third party's opaque model to decide what's allowed.
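
As a generic sketch of how those four bullets might map onto a single policy object (the schema below is illustrative, assumed for this post, and is not Control Core's actual policy language):

```python
# Illustrative policy schema combining three of the bullets above:
# fine-grained access, a pre-transaction guardrail, and a token budget.
POLICY = {
    "resource": "llm:chat-completions",
    "allow_roles": {"support-agent"},                 # access control
    "block_if_contains": ("ssn", "card_number"),      # real-time guardrail
    "daily_token_budget": 250_000,                    # cost governance
}

def decide(role: str, prompt: str, tokens_used_today: int) -> str:
    if role not in POLICY["allow_roles"]:
        return "deny: role not permitted on this resource"
    if any(term in prompt.lower() for term in POLICY["block_if_contains"]):
        return "deny: sensitive field detected before data moved"
    if tokens_used_today >= POLICY["daily_token_budget"]:
        return "deny: daily token budget exhausted"
    return "allow"

print(decide("support-agent", "Summarize ticket 4812", tokens_used_today=120_000))
# -> allow
print(decide("support-agent", "Include the customer's SSN", tokens_used_today=0))
# -> deny: sensitive field detected before data moved
```

The point of policy-as-code is that the rule lives in one governed place, not scattered across whichever integrations happened to implement it.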

Control Core is built in Canada 🍁 — with organizational sovereignty as a design principle, not a marketing line. The goal is to make technology innovation a real accelerator, not something held hostage by security gaps, access control debt, or regulatory uncertainty. Security and compliance should be the thing that lets you move faster — not the reason you slow down.

The organizations that will win the next decade aren't the ones with the most AI deployed. They're the ones with the most AI they actually trust and control.

Interested in seeing how Control Core deploys across your stack? Visit controlcore.io

Citations

¹ Google Threat Intelligence Group (GTIG). "GTIG AI Threat Tracker: Adversaries Leverage AI for Vulnerability Exploitation, Augmented Operations, and Initial Access." Google Cloud Blog, May 11, 2026. cloud.google.com/blog/topics/threat-intelligence/ai-vulnerability-exploitation-initial-access

² IBM Security. "Cost of a Data Breach Report 2025." IBM Corporation, 2025. Average breach cost figures cited reflect the 2025 annual study results.

All threat actor designations (APT27, APT45, UNC2814, UNC5673, TeamPCP/UNC6780), malware family names (PROMPTSPY, HONESTCUE, CANFAIL, LONGSTREAM, PROMPTFLUX, SANDCLOCK), and tool names (Hexstrike, Strix, Claude-Relay-Service, LiteLLM) referenced herein are drawn directly from the GTIG report cited above. Forward-looking predictions represent the editorial analysis of the Control Core team.