AI Automation Boom and the Non-Human Identity Crisis

Overview of AI Agents and Automation:

Modern businesses increasingly use no-code AI automation platforms – from workflow tools like Zapier, Make.com (Integromat), and n8n to AI-driven agents (e.g. AgenticFlow marketing flows or coding assistants like Cursor) – to save time and reduce staffing. In practice, these tools function like digital assistants or “robotic staffers,” each carrying its own login credentials or tokens. Like human employees, these AI agents need access cards (API keys, OAuth tokens, etc.) to perform tasks (sending emails, querying databases, posting messages). Unlike human accounts, however, non-human identities (NHIs) often have long-lived keys, few rotation policies, and little oversight. Imagine a sprawling factory floor where new robot workers are added daily, each given badges that access sensitive areas – it’s easy for hidden entry points and untracked access to emerge.

The Rise of Non-Human Identities and Secret Sprawl:

Each new AI-enabled workflow quietly creates another machine identity. On average, organizations today juggle dozens of machine accounts per employee (some report ~45 bots for every human). These can be service accounts, integration bots, chatbots, CI/CD pipelines, or AI agents tied to various apps. Each account needs a secret (password, API key, certificate, OAuth token) to connect to other systems. Without strict governance, these secrets proliferate unchecked. In 2024 alone, GitGuardian found over 23.7 million new hardcoded secrets on public GitHub – a 25% year-over-year surge – and most leaked credentials remain valid for months or years. Use of AI coding assistants also correlates with higher leak rates: GitHub repos with Copilot enabled leaked secrets 40% more often than average. In short, the convenience of AI-driven automation is feeding a credential sprawl: machine identities outnumber people, secrets multiply, and bad actors only need one weak link to gain a foothold.
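
A first step toward governance is simply knowing which machine identities exist and how stale their secrets are. Below is a minimal sketch of such an inventory in Python; the entries and the 90-day rotation policy are hypothetical, and in practice the data would come from the platforms' admin APIs or a secrets manager rather than being hardcoded.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    name: str            # e.g. "zapier-crm-sync" (hypothetical)
    owner: str           # team accountable for the credential
    secret_kind: str     # "oauth_token", "api_key", "certificate", ...
    last_rotated: datetime

# Hypothetical registry; a real one would be pulled from admin APIs
# or a secrets manager, not hardcoded in a script.
registry = [
    MachineIdentity("zapier-crm-sync", "growth", "oauth_token",
                    datetime(2023, 1, 10, tzinfo=timezone.utc)),
    MachineIdentity("n8n-github-bot", "platform", "api_key",
                    datetime(2024, 11, 2, tzinfo=timezone.utc)),
]

MAX_AGE = timedelta(days=90)  # example rotation policy

def stale(identity: MachineIdentity) -> bool:
    """Flag credentials that have outlived the rotation policy."""
    return datetime.now(timezone.utc) - identity.last_rotated > MAX_AGE

for ident in registry:
    if stale(ident):
        print(f"[ROTATE] {ident.name}: {ident.secret_kind} owned by {ident.owner}")
```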

Amplified Risks in Chained Workflows (A2A, RAG, and Orchestration)

Agent-to-Agent (A2A) Threats:

Modern AI agents increasingly coordinate with each other (A2A) and with data tools. In these multi-agent systems, one compromised component can poison the rest. For example, naming and impersonation attacks can trick agents into talking to malicious “look-alike” services. Research has shown that if a fake MCP/A2A server or Agent Card uses a deceptive name (e.g. finance-tool-mcp vs. financial-tools-mcp), an AI agent may unwittingly connect to the bad endpoint. This can let an attacker capture access tokens or sensitive data intended for a legitimate tool. In effect, one rogue agent in a network of agents can siphon secrets or inject malicious instructions into the workflow chain.
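
A simple defense is to refuse any endpoint whose name is not an exact match against a vetted allowlist, rather than trusting human-readable names. A minimal sketch, where the server name and URL are hypothetical:

```python
# Hypothetical allowlist of vetted tool servers, keyed by exact name.
TRUSTED_SERVERS = {
    "financial-tools-mcp": "https://mcp.finance.internal.example.com",
}

def resolve_tool_server(name: str) -> str:
    """Return the pinned URL for an exactly matching, vetted server name.

    Look-alike names such as "finance-tool-mcp" fail the exact-match
    check instead of silently receiving tokens meant for the real tool.
    """
    if name not in TRUSTED_SERVERS:
        raise PermissionError(f"Unknown or unvetted tool server: {name!r}")
    return TRUSTED_SERVERS[name]

# Usage: the legitimate name resolves, the look-alike is rejected.
print(resolve_tool_server("financial-tools-mcp"))
try:
    resolve_tool_server("finance-tool-mcp")  # deceptive look-alike name
except PermissionError as err:
    print(f"Blocked: {err}")
```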

Retrieval-Augmented Generation (RAG) Risks:

 Many automation bots now use LLMs with RAG pipelines – pulling in data from internal knowledge bases (Wiki, Confluence, vector DBs) to answer questions. This can inadvertently expose secrets. For instance, suppose an AI chatbot is fed an internal Confluence page that contains a plaintext API key. The bot might regurgitate that key to any user who asks the right question, or leak it in logs. In one example, a support chatbot retrieved developer credentials from an internal page and suggested them to a user, all without an admin realizing the leak. Even without malice, RAG agents process prompts and data that often include sensitive content (PII, internal notes, financial figures, etc.). If the RAG pipeline or vector database is misconfigured or exposed, all that private information can spill out. In fact, a recent security report found dozens of open vector DB instances on the public internet containing corporate emails, customer PII, and more. Worse, an attacker can pollute or corrupt an exposed vector store so that every future query returns poisoned or malicious data, without anyone noticing. In complex automated flows (for example, a webhook in Zapier calling an external AI service, which then posts a result back into Slack), a single misused token or stray data leak can cascade through multiple systems.
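
One way to reduce this exposure is to screen document chunks for credential-like strings before they are embedded and indexed at all. The sketch below assumes a simple regex filter in front of the indexing step; the patterns are illustrative only, and production scanners (GitGuardian and similar tools) use far richer detectors and entropy checks.

```python
import re

# Illustrative patterns only; not an exhaustive secret detector.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),       # GitHub personal access token
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
]

def safe_to_index(chunk: str) -> bool:
    """Return False if the chunk appears to contain a credential."""
    return not any(p.search(chunk) for p in SECRET_PATTERNS)

def index_documents(chunks: list[str]) -> list[str]:
    """Pass only credential-free chunks on to the embedding/indexing step."""
    indexed = [c for c in chunks if safe_to_index(c)]
    skipped = len(chunks) - len(indexed)
    if skipped:
        print(f"Skipped {skipped} chunk(s) containing possible secrets")
    return indexed
```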

Orchestration Overreach:

The more services are chained together, the more subtle failures can propagate. Automation platforms often connect dozens of tools via OAuth or API bridges. If any connector holds broad access (e.g. a Zapier “team” with admin rights), an attacker can exploit that to move laterally. Ghost login attacks illustrate this: even after a user changes their password, an existing workflow’s OAuth tokens may remain active, allowing a persistent backdoor. In Zapier, for example, a hacker who had connected a victim’s Dropbox to their own Zap was still able to siphon off data after the victim reset their login, because the OAuth token in the Zap remained valid. Similarly, orchestration errors (misconfigured filters or multi-step actions) can cause an agent to flood sensitive outputs into the wrong channel or continue a task past its intended scope. In short, chains of AI actions multiply the stakes: a leak or compromise in one link often propagates to others.
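
One way to surface such “ghost” grants is to compare when each integration was authorized against the account’s last credential reset and flag anything older. The sketch below works from a hypothetical export of connected integrations; real platforms expose this information through their admin consoles or APIs.

```python
from datetime import datetime, timezone

# Hypothetical export of a user's connected integrations; in practice this
# would come from the automation platform's or identity provider's admin API.
connections = [
    {"app": "Dropbox", "granted_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},
    {"app": "Slack",   "granted_at": datetime(2025, 2, 10, tzinfo=timezone.utc)},
]

last_password_reset = datetime(2025, 1, 15, tzinfo=timezone.utc)

# Any grant issued before the last reset may be a leftover ("ghost") token
# that survived the password change and should be re-authorized or revoked.
for conn in connections:
    if conn["granted_at"] < last_password_reset:
        print(f"[REVIEW] {conn['app']} grant predates the last password reset")
```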

Real-World Leak Examples

  • Zapier Code Repository Breach: In Feb 2025 Zapier disclosed that attackers had compromised its internal GitHub code repos. During an audit, Zapier discovered that customer data (used for debugging) had been inadvertently copied into those repos, exposing it to the intruder. Although Zapier said tokens in live databases were not compromised, this incident shows how even non-production data (or screenshots, logs) handled by automation vendors can leak.

  • GitHub Actions Supply-Chain Attack: In March 2025 a popular CI/CD workflow action (tj-actions/changed-files) was backdoored (tracked as CVE-2025-30066). The malicious version scanned the GitHub runner’s memory and extracted secrets (AWS keys, GitHub PATs, private keys, NPM tokens) from environment variables. It then encoded and printed them in the build logs, bypassing GitHub’s secret-masking. Public repos using the compromised action immediately had all their secrets published live. This highlights how a single tainted automation component can breach entire cloud environments (a sketch of checking workflows for unpinned actions follows this list).

  • “Ghost” OAuth Persistence (Zapier): As explained above, security researchers demonstrated that in Zapier, a hacker who gains initial access (e.g. to a user’s Dropbox) can “ghost” their own Zapier account into that workflow. Even if the user later changes their password or revokes the Dropbox OAuth, the attacker’s Zap remains connected and continuously grabs new files. This attack evades typical token revocation, making automation platforms a kind of hidden persistence mechanism.

  • Exposed Vector Databases: A 2024 study found around 30 unauthenticated vector DB instances on the open web, filled with sensitive corporate data. Some even contained API keys for other AI services (Pinecone, etc.), meaning an attacker could jump from one breach to another. Any AI agent relying on those exposed stores could either leak or ingest poisoned data as mentioned.
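
Referring back to the GitHub Actions incident above, a common mitigation is to pin third-party actions to full commit SHAs rather than mutable tags. The sketch below scans workflow files for unpinned references; it assumes workflows live under .github/workflows, and the regex is deliberately simple.

```python
import re
from pathlib import Path

# A "uses:" reference pinned to a 40-character commit SHA is immutable;
# tags and branch names (e.g. @v4, @main) can be repointed by an attacker.
USES_LINE = re.compile(r"^\s*(?:-\s*)?uses:\s*(\S+?)@(\S+)", re.MULTILINE)
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_dir: str = ".github/workflows") -> list[str]:
    findings = []
    for wf in Path(workflow_dir).glob("*.y*ml"):
        for action, ref in USES_LINE.findall(wf.read_text()):
            if not FULL_SHA.match(ref):
                findings.append(f"{wf.name}: {action}@{ref}")
    return findings

if __name__ == "__main__":
    for finding in unpinned_actions():
        print(f"[UNPINNED] {finding}")
```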

Automation Platforms and Their Secrets

Platform | Typical Secrets and Credentials
Zapier | OAuth tokens for integrated apps (e.g. Google Workspace, Slack, Dropbox), API keys (e.g. Twilio, SendGrid), webhook URLs/tokens, and stored passwords for some legacy services.
Make.com (Integromat) | OAuth or API credentials for apps (e.g. Gmail, Asana), HTTP module API keys, database passwords, and custom webhook/API secrets.
n8n (self-hosted/cloud) | API keys and OAuth tokens for any connected service (GitHub tokens, AWS keys, etc.), environment secrets for workflows, and credentials for HTTP requests.
AgenticFlow (AI marketing) | API keys and tokens for ad platforms (Facebook, Google Ads), email/SMS gateways (SendGrid, Twilio), CRM accounts, and any custom API integrations.
GitHub Actions | Secrets (GitHub-encrypted variables) such as cloud credentials (AWS access keys, Azure tokens), SSH keys, and GitHub Personal Access Tokens stored in workflows or environments.
Cursor (AI Code Editor) | Uses the user’s IDE/session tokens (e.g. GitHub login tokens) and may store its own API key for AI model access. (It has no built-in integrations, but connected code repos could expose secrets.)
General Webhooks/APIs | Any service using webhooks (e.g. Slack incoming webhooks, Microsoft Teams connectors) will have secret URLs or tokens embedded.

Each of these platforms holds keys with broad reach: OAuth tokens often grant full read/write to an application, and API keys can allow transactions or data dumps. For example, a Zapier OAuth token for Google Drive lets a Zap read all a user’s files. If that token leaks, the attacker gains complete access to that Google account’s data.

Propagation of Risk: Emerging Threat Vectors

  • Open APIs and Plugins: Many teams connect AI agents to public APIs or develop custom plugins. If an agent calls an external API, the transmitted data (including secrets or PII in prompts) could be logged or intercepted. A misbehaving plugin might also exfiltrate context. Similarly, if an enterprise API key for an AI service (like OpenAI’s API) is left unsecured, any connected bot or user could exploit it to run up usage or access proprietary prompts/responses.

  • Vector Database Poisoning: As noted, exposed vector stores are magnets for attackers. Even if the data is not directly stolen, an attacker could insert malicious embeddings or erase records. For instance, a compromised store could supply a trojaned image or malicious document to any downstream RAG agent, causing it to execute harmful code or propagate wrong information. Because these databases often lack access visibility, such tampering can silently poison every agent that queries them (a sketch of a simple integrity check follows this list).

  • Memory Plugins and Chatlogs: Some AI systems offer “memory” features that log user inputs and outputs. If misused, this could persist secrets across sessions. A chatbot storing conversation history might inadvertently log a password or API key shared in chat. If that storage is not properly secured (or is accessible to third-party providers), those secrets leak over time. While not yet a reported breach, it is a plausible future attack: an intruder gaining read access to an AI memory store could harvest confidential info from past dialogues.

  • Cascading Workflow Failures: Complex workflows can multiply errors. For example, one misconfigured step might forward an entire database query result (with credentials) to a later step that emails it to a marketing list. Another scenario: a Slack bot created by an automation might reply to the wrong channel if its permissions are too broad. Any single failure mode can cascade along the chain of connected automations, compounding damage.
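
For the vector-database poisoning scenario above, one hedged mitigation is to store a content hash with every record and verify it at retrieval time, so silently modified or injected entries fail the check. In the sketch below a plain dictionary stands in for a real vector store; in practice the hashes should live in a separate, write-protected location so an attacker cannot rewrite both at once.

```python
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Stand-in for a vector store record: id -> (document text, stored hash).
# Hashes are co-located here for brevity only; keep them out of band in practice.
store: dict[str, tuple[str, str]] = {}

def add_document(doc_id: str, text: str) -> None:
    store[doc_id] = (text, fingerprint(text))

def retrieve(doc_id: str) -> str:
    text, expected = store[doc_id]
    if fingerprint(text) != expected:
        raise ValueError(f"Integrity check failed for {doc_id}; possible tampering")
    return text

add_document("kb-001", "Quarterly onboarding checklist for the support team.")
print(retrieve("kb-001"))
```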

Mitigation and Best Practices

AI automation is becoming business-critical, and we are not the only ones examining its risks. The GitGuardian security framework for AI agents highlights five key controls – Audit, Centralize, Prevent, Improve, and Restrict – to manage non-human identities and secrets. In practice, organizations should adopt secrets hygiene and governance akin to that for human accounts. First, inventory all automation accounts and the credentials they hold (build a data flow map).

Audit and Clean Up: Eliminate or rotate any hardcoded secrets in source files, docs, or automation steps. Use tools (like GitGuardian or Reco) to scan code repositories, logs, and cloud configurations for stray keys. Once a credential is exposed, revoke or rotate it immediately (remember that 70% of leaked secrets remain valid for years if untouched). Where possible, move to dynamic, ephemeral credentials (e.g. cloud-managed tokens, short-lived IAM roles) instead of static long-lived keys.
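
As a concrete illustration of the ephemeral-credential approach, the sketch below uses AWS STS to mint 15-minute credentials for an automation instead of embedding a long-lived access key. It assumes boto3 is available and that the workflow’s runtime identity is permitted to assume the role; the role ARN and session name are hypothetical.

```python
import boto3

def short_lived_credentials(role_arn: str, session_name: str) -> dict:
    """Exchange the caller's identity for credentials that expire in 15 minutes."""
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=900,  # expires on its own; nothing long-lived to leak
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return resp["Credentials"]

# Hypothetical role scoped to exactly what this automation needs.
creds = short_lived_credentials(
    "arn:aws:iam::123456789012:role/marketing-export-bot", "nightly-export"
)
```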

Least Privilege and Access Control: Grant each automation the minimal permissions it needs. For example, if a Zapier workflow only needs to post to one Slack channel, use an app credential restricted to that channel – not a global workspace token. Implement strict approval processes for any new bot or integration. Use enterprise versions of tools when available, as they offer audit logs and managed security. Enforce 2FA on all related user accounts, and review OAuth grants regularly to catch “ghost” connections.

Secure RAG Pipelines: For workflows using retrieval-augmented generation or vector stores, lock down those data sources. Ensure any knowledge base or database behind an AI has robust authentication (turn off anonymous access). Tokenize or mask truly sensitive fields so they never enter the AI context. Encrypt data at rest and in transit, and monitor access to vector DBs. Filters should sanitize AI outputs: for instance, program the agent to redact any value matching a credential pattern before returning a response. Be especially cautious with public or personal data: as one analysis warns, RAG systems often ingest sensitive user notes or PII, which must be prevented from leaking.
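
Along the lines of the output sanitization described above, here is a minimal sketch of a redaction pass over the agent’s response before it is returned to the user. The patterns are illustrative only and would need tuning for your environment.

```python
import re

REDACTION_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GITHUB_TOKEN]"),
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
]

def redact(response: str) -> str:
    """Replace credential-like substrings before the reply leaves the agent."""
    for pattern, replacement in REDACTION_PATTERNS:
        response = pattern.sub(replacement, response)
    return response

print(redact("The key is api_key: sk-live-12345"))
# -> "The key is api_key: [REDACTED]"
```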

Continuous Monitoring and “Secure by Design”: Treat AI automations like any critical software. Follow frameworks (such as DHS/CISA’s AI Risk Management Guidelines) that emphasize risk mapping and secure design. Maintain an up-to-date inventory of all AI use cases and the data they touch. Implement logging on every step of your automation flows and regularly review them for anomalies. When integrating third-party agents or APIs, scrutinize their trust boundaries: one best practice is to “sandbox” new workflows with non-sensitive data before granting full access.

Educate and Govern: Finally, ensure the teams (including non-technical marketers and growth hackers) understand these risks. Enforce policies that explicitly forbid uploading classified or regulated data into public AI tools. Provide clear credential management guidance: use vaults or CI secrets storage, never email keys, and rotate passwords routinely. As GitGuardian advises, prioritize both early detection (scanning) and swift remediation (revoking keys) if a leak is found. By centralizing the management of all machine identities and applying these controls (audit trails, limited scopes, and continuous monitoring), organizations can enjoy the speed of AI automation while keeping secrets and identities under control.

Table 1. Common Automation Tools and the Credentials They Touch

Platform | Typical Stored Secrets and Credentials
Zapier | OAuth tokens for connected apps (Google Workspace, Slack, Dropbox, Twitter, etc.); API keys for services like Twilio or Mailgun; webhook URLs with embedded secrets; stored passwords for some legacy apps.
Make.com (Integromat) | API credentials (OAuth tokens or keys) for cloud apps (Gmail, Facebook Ads, AWS, etc.); HTTP module keys and webhooks; database credentials (MySQL, PostgreSQL) used in workflows.
n8n (self-hosted/cloud) | API keys and OAuth tokens for any integrated service (GitHub, GitLab, Google APIs, etc.); webhook secrets; environment variables and credentials (e.g. FTP/SFTP, databases, AWS IAM keys) stored in its database.
AgenticFlow (AI marketing) | API keys/tokens for ad platforms (Google Ads, Facebook Ads, LinkedIn); CRM credentials (e.g. Salesforce API tokens); email/SMS gateway keys (SendGrid, Twilio); any HTTP/webhook secret used in automated campaigns.
GitHub Actions | GitHub-encrypted secrets such as cloud provider credentials (AWS_ACCESS_KEY, Azure client secret, GCP JSON key); SSH keys; third-party tokens (Docker Hub API key, npm token) referenced in workflows.
ChatOps/Bot Frameworks | Tokens and keys for services bots interact with (e.g. Slack bot tokens, Microsoft Teams app secrets, CI pipelines). These often span multiple tools and can indirectly touch OAuth tokens or API keys.

Each entry above represents high-value credentials. In marketing and growth automation, these typically include OAuth access to social media/business platforms, service API keys for emailing or analytics, and webhooks that carry data between systems. If any of these secrets are exposed (for example, leaked in a public workflow file or intercepted via a compromised agent), attackers can hijack entire services or exfiltrate sensitive customer data.

Summary and Next Steps

The convergence of AI agents and no-code automation has turbocharged productivity—but it has also created a non-human identity crisis. Machine accounts now far outnumber people, and every new AI-powered workflow adds another potential leak point for credentials and secrets. The risk multiplies when agents communicate (A2A) or fetch data (RAG). To stay secure in this environment, organizations must treat every bot and flow like a first-class citizen in their security policy: govern, monitor, and minimize its access and data. By applying strong secret management (rotation, vaults, push protection), strict access controls, and the auditing and governance guidelines outlined above, teams can harness AI automation with confidence that sensitive passwords and data won’t slip through the cracks.

Sources: Industry reports and analyses underscore these concerns. For instance, GitGuardian’s State of Secrets Sprawl 2025 documents the explosive growth of exposed credentials. Cybersecurity experts also highlight the need to inventory AI use cases and enforce security-by-design for all AI systems. Real-world incidents (e.g. Zapier’s 2025 breach, GitHub Actions supply-chain hacks) illustrate how intertwined AI automation and security really are. The guidance above draws on these analyses to help marketers and growth engineers deploy AI tools safely, with a clear strategy for protecting credentials and identities in an automated world.

FAQ: AI, Automation, and Risk Management

Q: What is Artificial Intelligence (AI) and how does it work?

A: AI, or artificial intelligence, is a branch of computer science focused on creating machines capable of performing tasks that typically require human intelligence. AI systems use algorithms and machine learning techniques to analyze data, learn from it, and make decisions or predictions. AI can automate various processes, from simple tasks to complex decision-making, by leveraging large datasets and computational power. AI-powered tools, such as chatbots and AI agents, are increasingly used to enhance efficiency and reduce the need for human intervention. For example, generative AI tools like ChatGPT use deep learning and large language models to create content, showcasing AI’s capabilities in mimicking human creativity.

Q: What are the benefits of using AI in business?

A: AI offers numerous benefits to businesses, including increased efficiency, cost savings, and improved decision-making. By automating repetitive tasks, AI allows employees to focus on more strategic activities, streamlining business processes. AI-driven tools can analyze vast amounts of data quickly, providing valuable insights for better decision-making. Additionally, AI can enhance customer experiences through personalized recommendations and AI chatbots that provide instant support. Businesses can leverage AI to optimize workflow, reduce costs, and explore new use cases, but they must maintain oversight to address potential risks like security vulnerabilities.

Q: What is automation and how does it relate to AI?

A: Automation refers to the use of technology to perform tasks with minimal human intervention. AI is a key driver of automation, as it enables machines to learn and adapt to new situations. AI-driven automation can streamline business processes, reduce errors, and increase productivity across industries. For instance, AI automation powers tools like chatbots and AI agents, which handle tasks such as customer inquiries or data processing. However, automation requires human oversight to manage risks associated with it, such as vulnerability to cyberattacks or misuse by bad actors, ensuring safe and effective use of AI.

Q: What are the AI risks of automation?

A: While automation offers significant benefits, it also introduces several risks. A primary concern is the potential loss of jobs, as machines can replace human workers in certain roles. Additionally, automation can create security vulnerabilities if not properly managed—automated AI systems may be exploited by bad actors through cyberattacks or misuse. For example, hardcoded secrets in automation tools can lead to breaches, as seen in incidents where AI tools exposed sensitive data. To mitigate these risks, organizations must implement robust security practices, rotate credentials, and maintain control of AI and automation processes.

Q: What are the main risks associated with AI use?

A: AI poses several risks, including bias, lack of transparency, and potential misuse. AI systems can perpetuate biases present in training data, leading to unfair or discriminatory outcomes. The complexity of AI models can obscure decision-making processes, raising concerns about accountability and making explainable AI necessary. AI can also be misused for malicious purposes, such as creating deepfakes or automating cyberattacks. AI security programs must address these key risks by building trust in AI systems and protecting against threats from bad actors.

Q: How can we manage the risks of AI tools?

A: Managing the risks of AI requires a comprehensive risk management approach. Developers should use diverse training data to reduce bias and adopt explainable AI to enhance transparency. Establishing clear regulations for AI use can prevent misuse, while regular security testing identifies vulnerabilities. AI safety also depends on human supervision to oversee AI systems and address security risks. For instance, AI training should incorporate best practices to mitigate risks, and businesses must monitor AI technologies to ensure they align with ethical standards and are protected against threats like data breaches.

Q: What is generative AI and how does it work?

A: Generative AI is a subset of AI that creates new content—such as text, images, or music—based on patterns learned from existing data. It relies on deep learning techniques and large language models, as seen in generative AI tools like ChatGPT from OpenAI. Generative AI can automate creative tasks, such as writing or design, by mimicking human creativity. However, it introduces potential risks, such as generating misleading content or exposing sensitive data, requiring careful oversight to ensure safe use of AI and protect against security vulnerabilities.

Q: What are AI agents and how do they differ from traditional AI systems?

A: AI agents are autonomous systems that perform tasks or make decisions independently, unlike traditional AI systems designed for specific functions. Powered by artificial intelligence, AI agents can adapt to new situations and learn from experience, making them ideal for automation in areas like customer service or marketing. Agentic AI exemplifies this adaptability but raises concerns about accountability and human oversight. Compared to static AI tools, AI agents offer more advanced capabilities, though their autonomy increases risk when proper security practices are not in place.

Q: What are the biggest challenges in AI development?

A: The biggest challenges in AI development include ensuring ethical use of AI and addressing security risks. Developers face difficulties in eliminating bias, securing vast amounts of training data, and building trust in AI. The shortage of skilled professionals to train AI models and manage AI systems adds complexity. AI could also be misused if not governed properly, amplifying its impact on global security. Collaboration among stakeholders is essential to overcome these security challenges and ensure AI technologies are developed responsibly with robust risk management.

Q: How can businesses leverage AI to stay competitive?

A: Businesses can leverage AI by integrating it into operations to automate tasks and enhance customer experiences. AI-driven automation optimizes workflows, while AI tools like chatbots improve service delivery. Investing in AI training equips teams to use AI effectively, unlocking innovative use cases. However, leveraging AI requires risk management to address vulnerabilities and ensure compliance with regulations. By balancing the benefits and risks, companies can use AI automation to stay competitive, provided they implement security testing and maintain human oversight to safeguard against an increased risk of breaches.