October 11, 2025
Deploying multiple AI agents in environments like IoT and ICS/SCADA introduces significant security challenges, particularly when those agents exchange data or make control decisions autonomously. Without carefully designed guardrails, AI systems can leak sensitive data, be hijacked, or corrupt physical systems. To mitigate these risks, organizations should anchor their approach in a security framework (such as MAESTRO) that provides accountability, traceability, and structured threat modeling across AI agents. Consolidating most AI workloads onto a primary platform also helps centralize identity management and monitoring, reducing the risk that isolated, poorly controlled agents proliferate at the edge.
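To make the centralized-identity idea concrete, here is a minimal Python sketch of a central agent registry that issues one identity per agent, scopes what each agent may do, and logs every authorization decision. The class, owner, and scope names are hypothetical illustrations, not something specified in the source.

```python
import logging
import uuid
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """Identity and audit metadata for one registered agent."""
    agent_id: str
    owner: str
    scopes: list[str] = field(default_factory=list)


class AgentRegistry:
    """Central registry: every agent gets one identity and one audit trail."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}
        self._log = logging.getLogger("agent-registry")

    def register(self, owner: str, scopes: list[str]) -> AgentRecord:
        record = AgentRecord(agent_id=str(uuid.uuid4()), owner=owner, scopes=scopes)
        self._agents[record.agent_id] = record
        self._log.info("registered agent %s for %s", record.agent_id, owner)
        return record

    def authorize(self, agent_id: str, scope: str) -> bool:
        # Unknown agents (e.g. islanded edge deployments) are denied by default.
        record = self._agents.get(agent_id)
        allowed = record is not None and scope in record.scopes
        self._log.info("agent=%s scope=%s allowed=%s", agent_id, scope, allowed)
        return allowed


registry = AgentRegistry()
sensor_agent = registry.register(owner="plant-ops", scopes=["telemetry:read"])
assert registry.authorize(sensor_agent.agent_id, "telemetry:read")
assert not registry.authorize(sensor_agent.agent_id, "actuator:write")
```

The point of the default-deny check is that an agent spun up outside the registry simply has no identity the rest of the system will honor.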
Further steps involve creating secure sandboxes where developers can test AI interactions safely, enforcing that Model Context Protocol (MCP) servers run only in trusted environments, and treating these AI systems as APIs, complete with gateways, rate limits, authentication, and logging. All interactions should use robust identity mechanisms like OAuth, and backend communications should rely on short-lived certificates (e.g., via SPIFFE) to mutually authenticate AI agents and data sources. When built in from the start, these defenses don't slow innovation; they make it safer and more trustworthy, especially when AI begins interacting directly with physical infrastructure.
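As an illustration of the OAuth guidance, the sketch below uses the standard client-credentials flow so an agent obtains a short-lived bearer token before calling a gateway. The identity-provider and gateway URLs, client ID, and scope are all hypothetical placeholders.

```python
import requests

TOKEN_URL = "https://idp.example.internal/oauth2/token"  # hypothetical IdP


def fetch_agent_token(client_id: str, client_secret: str, scope: str) -> str:
    """Exchange agent credentials for a short-lived bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


# Every call to the gateway carries the token, so the agent can be
# authenticated, rate-limited, and logged like any other API client.
token = fetch_agent_token("historian-agent", "s3cret", "telemetry:read")
resp = requests.get(
    "https://gateway.example.internal/v1/telemetry",  # hypothetical gateway
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
```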
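For the backend leg, here is a minimal sketch of mutual TLS using Python's standard ssl module, assuming a SPIFFE/SPIRE workload agent rotates short-lived X.509 SVIDs at the file paths shown (the paths are illustrative, not prescribed by the source).

```python
import ssl

# Paths where a SPIFFE/SPIRE agent is assumed to write and rotate
# short-lived X.509 SVIDs; adjust to your deployment.
CERT = "/run/spire/svid.pem"
KEY = "/run/spire/svid_key.pem"
BUNDLE = "/run/spire/bundle.pem"


def mutual_tls_context() -> ssl.SSLContext:
    """Build a client context that both presents and requires a certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile=CERT, keyfile=KEY)  # prove this agent's identity
    ctx.load_verify_locations(cafile=BUNDLE)         # trust only the SPIFFE bundle
    # SPIFFE deployments verify the peer's SPIFFE ID rather than a DNS
    # hostname, so hostname checking is relaxed here; certificates are
    # still required and verified against the trust bundle.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Because the SVIDs are short-lived, a stolen key is only useful until the next rotation; libraries such as py-spiffe can fetch and rotate SVIDs directly from the Workload API instead of reading them off disk.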
Source: https://gca.isa.org/blog/7-practical-steps-to-secure-multi-ai-deployments-for-iot-and-ics-scada