Comprehensive Analysis of AI Agents, Risks, and Secure Automation
Introduction to Agentic AI, AI Workflows, and Potential Risks

Risks of AI Agents and Non-Human Identities
At Cyber Strategy Institute, we have closely tracked the explosive growth of AI automation workflows and the proliferation of non-human identities (NHIs) driven by large language models (LLMs) and retrieval-augmented generation (RAG) systems. Our 2024 AI Trend Report highlights a critical security gap: many organizations adopting agentic AI workflows fail to implement fundamental identity and access management controls for these NHIs.
NHIs frequently lack essential safeguards such as credential rotation, scoped permissions, and formal decommissioning processes. This creates a sprawling attack surface with high-risk connections ripe for exploitation. Alarmingly, our research shows that AI development teams often bypass security reviews when integrating agents into internal tooling and automation pipelines. This practice leaves sensitive APIs, secrets, and user data vulnerable to compromise by rogue AI or malicious actors.
Our findings align with the insights from GitGuardian’s State of Secrets Sprawl 2025 report, which uncovered 23.7 million secrets exposed on public GitHub repositories in 2024. This surge correlates strongly with AI agent sprawl and poor governance of NHIs. Repositories leveraging GitHub Copilot leaked secrets at a 40% higher rate, underscoring the unintended consequences of AI-assisted development.
Furthermore, our analysis of Model Context Protocol (MCP) servers, a core infrastructure layer that connects AI agents to external tools and data, found that 5.2% of these servers contained at least one hardcoded secret, compared to 4.6% across all public repositories. This reveals that AI infrastructure is becoming an increasingly vulnerable focal point for attackers.
We have also observed that LLM-powered chatbots and AI systems can inadvertently expose sensitive credentials, such as those stored in Confluence or helpdesk systems. These secrets often proliferate via logs that reside in unsecured cloud storage, exponentially increasing risk.
In summary, as our 2024 AI Trend Report emphasizes, “AI is accelerating faster than our guardrails.” Until organizations treat NHIs with the same rigor as human identities—incorporating robust governance, continuous monitoring, and lifecycle management—the automation wave will continue to widen the attack surface and amplify cybersecurity risks.
For the full analysis and recommendations, see our 2024 AI Trend Report and GitGuardian’s State of Secrets Sprawl 2025 report.
Remediation Options
- Audit and Clean Up Data Sources: Eliminate or revoke access to secrets in data sources such as Jira, Slack, and Confluence to prevent LLM leaks. Tools like GitGuardian can help identify and manage exposed secrets, ensuring data sources are sanitized.
- Centralize NHI Management: Centralize secrets storage with tools like HashiCorp Vault, CyberArk, or AWS Secrets Manager to enable automated rotation and reduce the risk of secrets being scattered across systems (a minimal retrieval sketch follows this list).
- Prevent Secrets Leaks in LLM Deployments: Implement secrets detection mechanisms, such as Git hooks or code editor extensions, to catch hardcoded secrets early in the development process, particularly in feature branches, and minimize the chance of secrets being committed to repositories.
- Improve Logging Security: Sanitize logs before storage or transmission to third parties to prevent secret exposure. GitGuardian’s ggshield tool offers automated scanning for secrets in logs.
- Restrict AI Data Access: Apply the principle of least privilege. For example, a customer-facing chatbot should be denied access to CRM systems, while an internal sales tool behind SSO can be granted limited access. This minimizes the potential impact of a compromised agent.
- Raise Developer Awareness: Educate developers and security teams on safe AI building practices by sharing guidance on processes and policies, such as avoiding hardcoded secrets and understanding the security implications of AI deployments, rather than relying solely on technological solutions.
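To illustrate the centralized approach, here is a minimal Python sketch that fetches an agent credential from HashiCorp Vault's KV v2 engine at runtime using the hvac client library; the secret path and field name are illustrative assumptions rather than a prescribed layout.

```python
# Minimal sketch: fetch an agent credential from HashiCorp Vault (KV v2)
# instead of hardcoding it. Assumes the `hvac` library is installed and
# that VAULT_ADDR / VAULT_TOKEN are supplied by the environment.
import os
import hvac

def get_agent_secret(path: str = "ai-agents/crm-bot", field: str = "api_token") -> str:
    """Read a single field from a KV v2 secret; path and field are illustrative."""
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],      # e.g. https://vault.internal:8200
        token=os.environ["VAULT_TOKEN"],   # short-lived token from your auth method
    )
    if not client.is_authenticated():
        raise RuntimeError("Vault authentication failed")
    response = client.secrets.kv.v2.read_secret_version(path=path)
    return response["data"]["data"][field]

if __name__ == "__main__":
    token = get_agent_secret()
    # Pass the token to the agent's API client at runtime; never log or print it.
```

Because the workflow only ever holds the secret in memory, rotation and revocation stay centralized in Vault rather than requiring edits to every automation that uses the credential.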
How the Threat Can Spread
- Supply Chain Attacks: If an AI agent with access to a software supply chain has its secrets compromised, attackers can introduce malicious code into downstream applications. A compromised agent managing open-source repositories, such as Hugging Face or GitHub, could poison datasets used in AI model training, affecting multiple projects. A 2025 supply chain attack on a GitHub Action, for example, exposed secrets in over 23,000 repositories.
- Cloud Service Compromises: If the credentials of an AI agent managing cloud resources are exposed, attackers can take control of those resources, leading to data breaches or denial-of-service attacks. A leaked credential for a cloud storage service, for instance, could allow unauthorized access to sensitive data across many cloud-dependent systems.
- IoT Device Vulnerabilities: Insecure AI agents managing Internet of Things (IoT) devices could enable botnets or other attacks on connected devices. This is particularly concerning in critical infrastructure sectors, where compromised IoT devices could disrupt operations or enable large-scale attacks.
- Poisoned AI Models: A model trained on compromised or poisoned data can produce inaccurate or harmful results when deployed in automation workflows, leading to widespread errors or security breaches across systems that rely on it, especially if the model is shared publicly or within an organization.
- Insecure API Integrations: Automations often rely on external APIs for functionality. If these APIs are not securely configured, attackers can exploit them to inject malicious code or data into automation processes, affecting every system that uses those APIs and amplifying the impact of a breach.
- Shared Development Environments: Where multiple developers or teams share resources (e.g., code repositories, CI/CD pipelines), a single compromised account or tool can spread malware or unauthorized access across projects, especially if secrets are not properly managed or projects are insufficiently isolated from one another.
- Unsecured CI/CD Pipelines: If the pipelines used to deploy automations are not secured, attackers can introduce malicious code into automation scripts or models. Once deployed, that code can affect every system that uses those automations, leading to widespread compromise.

AI Automations, Multi-AI Agent Systems, and the Automation Explosion
n8n
n8n is a workflow automation tool that allows users to create and manage workflows connecting different services. It supports external secrets management through integrations with tools like HashiCorp Vault, Infisical, and AWS Secrets Manager, so secrets are stored securely rather than hardcoded in workflows, and it offers expression-based secret referencing in credentials (see the n8n External Secrets documentation).
Make.com
Formerly known as Integromat, Make is another popular workflow automation platform. It handles credentials for connected apps securely, storing them in encrypted form and allowing users to reference them in workflows without exposing the actual secrets. Make emphasizes robust security measures, including GDPR compliance and SOC 2 Type 1 certification, and supports single sign-on (SSO) for enterprise customers (see Make Security and Compliance). While detailed documentation on credential management is limited, Make stores credentials securely, likely using encrypted storage and access controls.
Cursor
Cursor is an AI-powered code editor whose AI agents can complete tasks end to end, such as running terminal commands and modifying files. It is designed for developers, with capabilities like semantic code search and lint error detection (see Cursor Features). As a code editor, however, Cursor’s focus is coding assistance, and any credentials used within the code should be managed securely, for example through environment variables or dedicated secrets management tools like HashiCorp Vault.
AgenticFlow
AgenticFlow allows customization via a drag-and-drop interface, but specific details on its secrets management are not publicly documented. Given its focus on AI agents, it likely provides ways to manage credentials, possibly through integrations with external secrets managers or built-in features, to ensure secure interactions with external services.
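Across these platforms the common thread is the same: credentials should be injected at runtime rather than written into workflows or code. The short Python sketch below illustrates the environment-variable pattern mentioned above; the variable name CRM_API_KEY is an illustrative assumption.

```python
# Minimal sketch: load an automation credential from the environment rather
# than hardcoding it in a script or workflow. CRM_API_KEY is illustrative.
import os
import sys

def load_credential(name: str = "CRM_API_KEY") -> str:
    value = os.environ.get(name)
    if not value:
        # Fail fast rather than falling back to a hardcoded default.
        sys.exit(f"Missing required environment variable: {name}")
    return value

api_key = load_credential()
# Hand api_key to the HTTP client or SDK at runtime; never commit it to source control.
```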
Sanitization of Potential Risks While Developing Automations
- Data Minimization and Storage Limitation: Reduce data granularity, such as rounding coordinates to two decimal places, removing the last octet of IP addresses, or rounding timestamps to the hour (a minimal sketch of these transformations follows this list). Use less data where possible, such as limiting datasets to 10,000 records instead of 1 million, and delete data when it is no longer useful, such as data from seven years ago. Remove links and identifiers, like obfuscating user IDs or device identifiers, to prevent re-identification.
- Privacy-Preserving Techniques: Implement distributed data analysis and secure multi-party computation to support data minimization, ensuring that sensitive information is not exposed during processing.
- Regular Security Checks: Conduct regular security audits and penetration testing to identify vulnerabilities in data handling and processing. Use tools like GitGuardian’s ggshield for automated scanning of secrets in logs.
- Secure Credential Management: Avoid hard-coding access rights in scripts; instead, use API calls linked to a central repository, such as HashiCorp Vault. Store all bot credentials securely in an encrypted vault and periodically revoke unnecessary privileges.
- Log Sanitization: Sanitize logs before storage or transmission to third parties to prevent secret exposure, ensuring they are clean and tamper-proof for forensic investigations.
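The data minimization steps above translate into small, testable helpers. The following Python sketch shows the specific transformations mentioned (rounding coordinates to two decimal places, dropping the last octet of an IPv4 address, rounding timestamps to the hour); the record shape and field names are illustrative assumptions.

```python
# Minimal data-minimization helpers for the transformations described above.
# Record layout and field names are illustrative.
from datetime import datetime

def round_coordinate(value: float, places: int = 2) -> float:
    """Reduce location precision, e.g. 51.507351 -> 51.51."""
    return round(value, places)

def mask_ipv4(address: str) -> str:
    """Drop the last octet of an IPv4 address, e.g. 192.0.2.17 -> 192.0.2.0."""
    octets = address.split(".")
    return ".".join(octets[:3] + ["0"])

def round_timestamp_to_hour(ts: datetime) -> datetime:
    """Round a timestamp down to the hour."""
    return ts.replace(minute=0, second=0, microsecond=0)

def minimize_record(record: dict) -> dict:
    """Apply the helpers to a single event record before storage or training."""
    return {
        "lat": round_coordinate(record["lat"]),
        "lon": round_coordinate(record["lon"]),
        "ip": mask_ipv4(record["ip"]),
        "seen_at": round_timestamp_to_hour(record["seen_at"]),
        # Direct identifiers such as user_id or device_id are deliberately dropped.
    }

example = {
    "lat": 51.507351, "lon": -0.127758, "ip": "192.0.2.17",
    "seen_at": datetime(2024, 6, 1, 14, 37, 12), "user_id": "u-8842",
}
print(minimize_record(example))
```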

Checks, Methods, and Risk Reduction for Developers Using Agentic AI
Category | Checklist Items |
---|---|
Data Sanitization | – Track all training data sources. – Reduce data granularity (e.g., round coordinates). – Use minimal data and delete unnecessary data. – Remove links and identifiers to prevent re-identification. – Implement privacy-preserving techniques like distributed data analysis. |
Secure Data Handling | – Use strong privacy controls and comply with GDPR, HIPAA, CCPA. – Encrypt data and control access to the training environment. – Monitor for unusual patterns during training. – Protect inputs/outputs post-deployment; check logs regularly. – Implement data encryption, access control, and data masking. |
Model Security | – Test models against adversarial attacks and train to spot fake inputs. – Check for poisoned training data regularly. – Limit model query frequency and add security layers to hide workings. – Use explainability tools to detect backdoors. – Track model changes and maintain backups for rollback. – Document model capabilities and limitations. |
Secure Development | – Follow secure coding rules and conduct regular code reviews. – Use secure libraries and frameworks. – Apply least privilege for access control. – Conduct regular security audits and penetration testing. – Centralize secrets management using HashiCorp Vault or AWS Secrets Manager. – Store credentials in an encrypted vault, avoiding hard-coding in scripts (a minimal pre-commit secret-scanning sketch follows this table). – Assign unique identities to each automation tool with dedicated credentials. – Use n8n or Make.com’s secure credential storage features (e.g., external secrets in n8n, encrypted storage in Make.com). |
Deployment and Monitoring | – Deploy in secure cloud environments with limited API exposure. – Regularly patch systems and monitor for anomalies. – Implement real-time alerts and log analysis. – Sanitize logs before storage to prevent secret exposure. |
Incident Response | – Have a clear plan for data breaches, model tampering, and unauthorized access. – Define roles and conduct regular drills. |
Compliance and Governance | – Ensure compliance with GDPR, HIPAA, CCPA using AI compliance software. – Conduct regular audits. |
Third-Party Management | – Vet third-party tools for security certifications. – Control access and monitor third-party interactions. |
User Education | – Train users on phishing, strong passwords, and data handling. |
Continuous Improvement | – Perform quarterly security audits and update checklists. – Stay informed through cybersecurity forums. |
Collaboration and Training | – Collaborate with developers, data scientists, and security teams. – Provide ongoing AI security education and certifications. |
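As a concrete complement to the Secure Development items above (and the Git hooks recommendation in the remediation list), the following Python sketch scans staged Git changes for a few secret-looking patterns and blocks the commit on a match. The patterns are illustrative and deliberately incomplete; a dedicated scanner such as GitGuardian’s ggshield covers far more secret types.

```python
# Minimal pre-commit style sketch: scan staged changes for secret-looking
# patterns before they are committed. Wire it up via .git/hooks/pre-commit
# or a pre-commit framework; patterns below are illustrative only.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),    # private key blocks
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"),
]

def staged_added_lines() -> str:
    """Return only the added lines of the staged diff."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "\n".join(line for line in diff.splitlines() if line.startswith("+"))

def main() -> int:
    added = staged_added_lines()
    hits = [p.pattern for p in SECRET_PATTERNS if p.search(added)]
    if hits:
        print("Possible secrets detected in staged changes:", *hits, sep="\n  ")
        return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```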
Keeping Passwords and Secrets Secure in Automation
- Use Centralized Secrets Management: Platforms like n8n and Make.com support integrations with centralized secrets managers, such as HashiCorp Vault and AWS Secrets Manager, which store secrets securely and provide access controls. This reduces the risk of secrets being hardcoded in workflows or code.
- Avoid Hardcoding Secrets: Hardcoding secrets in code or configuration files is a major security risk, as seen in the GitGuardian report’s findings. Instead, use environment variables, vaults, or other secure storage mechanisms provided by the automation platform. For example, n8n allows referencing secrets via expressions, ensuring they are not exposed in workflow logs.
- Implement Least Privilege: Ensure that AI agents and workflows have only the necessary permissions to perform their tasks, reducing the impact of a potential breach. For instance, restrict access to sensitive systems and data unless absolutely required, aligning with the principle of least privilege highlighted in recent cybersecurity best practices OWASP Secrets Management Cheat Sheet.
- Monitor and Audit: Regularly monitor logs and audit trails for any unauthorized access or unusual activity related to secrets. Tools like GitGuardian’s ggshield can help detect and prevent secrets leaks, ensuring compliance with security standards. This is particularly important for multi-AI agent systems, where multiple agents may interact with sensitive data (a minimal log-redaction sketch follows this list).
- Educate Users: Train developers and users on the importance of secrets management and best practices for handling credentials. This includes understanding how to use the security features of automation platforms effectively, such as n8n’s external secrets integration or Make’s encrypted credential storage. Education can help align teams on safe AI building practices, reducing human error in secrets management.
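To make the monitoring and log-handling advice concrete, here is a minimal Python sketch of a logging filter that redacts secret-looking values before records are written or forwarded to third parties; the regular expressions are illustrative assumptions, not a complete detection rule set.

```python
# Minimal sketch: a logging filter that redacts secret-looking values before
# log records are written or shipped. Patterns are illustrative; a production
# setup would pair this with a dedicated scanner such as ggshield.
import logging
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                   # AWS access key ID
    re.compile(r"(?i)\b(api[_-]?key|token|password)\b\s*[:=]\s*\S+"),  # key=value style secrets
]

class RedactSecretsFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, ()  # replace the formatted message
        return True                            # keep the (sanitized) record

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("automation")
logger.addFilter(RedactSecretsFilter())

logger.info("Agent started with api_key=sk-test-1234567890abcdef")
# Logged as: Agent started with [REDACTED]
```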
Conclusion – Cybersecurity Risks to AI Agent-Based Workflows Are a Major Vulnerability

Frequently Asked Questions (FAQ)
1. What are AI agents and agentic AI, and why do they matter for cybersecurity?
AI agents, including agentic AI systems, are autonomous software entities designed to perform tasks and make AI-driven automation decisions without human intervention. In the era of agentic AI, these agents will play an increasingly important role across workflows, but also introduce unique cybersecurity risks and security vulnerabilities that require careful AI governance.
2. How do AI agent-based workflows introduce new security risks to enterprises?
AI agent-based automation often relies on credentials such as API keys and tokens, which create a host of security risks associated with AI agents. These risks include credential leaks, prompt injection vulnerabilities, and potential security breaches that increase the attack surface in enterprise environments.
3. What kinds of vulnerabilities and secrets do AI agents typically rely on, and why are they risky?
AI agents often require secrets like tokens, passwords, and API keys stored across code, logs, or configuration files. Poor secrets management leads to security vulnerabilities that attackers can exploit to manipulate AI agents or cause security breaches.
4. Why are secrets leaks becoming more common with AI development tools like GitHub Copilot and generative AI?
Generative AI tools accelerate AI development but risk exposing sensitive credentials through inadvertent suggestions or public repositories. This contributes to the risks of AI agents and agentic AI systems and increases the overall cybersecurity risk of AI adoption.
5. What are the main security concerns and attack vectors involving AI agent-based systems and their credentials?
Security risks associated with AI agents include supply chain attacks, poisoned AI models, API abuse, and manipulation through prompt injection. Malicious AI agents or compromised agents could lead to significant security risks, amplifying potential threats across systems.
6. How can organizations audit and clean data sources to reduce secret exposure in AI-driven automation?
Regular audits using automated tools help identify secrets leaked in source code, logs, or collaboration platforms like Jira and Slack. Implementing best practices such as centralized secrets management and continuous monitoring reduces the risks posed by AI agents and helps maintain data security.
7. What are best practices for secure AI and centralized secrets management in AI workflows?
AI best practices include using dedicated secrets managers (e.g., HashiCorp Vault), enforcing least privilege access, rotating credentials regularly, and sanitizing logs to prevent secrets exposure. These security practices minimize risks associated with agentic AI and AI agent-based automation.
8. How do popular workflow automation platforms like n8n, Make.com, Cursor, and AgenticFlow handle security risks associated with AI agents?
These platforms support integration with secure secrets management systems or encrypted storage, enabling secure AI agent credential handling and reducing new security risks while enabling efficient AI-driven automation.
9. What remediation steps help reduce the risks of AI agents leaking secrets or being exploited?
Remediation includes log sanitization, enforcing AI identity security policies, continuous monitoring, developer education on secure AI practices, and implementing AI risk management frameworks that address the risks associated with AI agents.
10. How do multi-agent AI systems amplify cybersecurity risks, and how can organizations mitigate them?
As agents interact in complex workflows, the attack surface expands. Mitigation requires strong AI governance, centralized credential management, secure AI best practices, and continuous testing of AI to detect potential risks and new security vulnerabilities.
11. What role do log sanitization and monitoring play in secure AI and data security?
Sanitizing logs ensures that AI agents do not inadvertently expose secrets, while monitoring detects unusual activity linked to AI agent-based automation. These measures help prevent security breaches and reduce the risks posed by malicious AI agents.
12. What secure coding and AI development best practices should developers follow to safeguard AI-driven workflows?
Developers should avoid hardcoding secrets, leverage secure libraries, conduct code reviews focused on security vulnerabilities, use automated secret scanning, and implement responsible AI principles to harness the power of AI safely.
13. How can organizations balance the benefits and risks of AI adoption while maintaining strong cybersecurity hygiene?
By implementing robust AI governance, following AI best practices, employing centralized secrets management and continuous monitoring, and educating developers on the risks of AI agents, organizations can balance the benefits and risks of AI adoption while minimizing cybersecurity exposure.