June 27, 2025
June 27, 2025
When organizations start using AI, they often begin with small, manageable systems—what Noelle Russell calls "baby tigers." But as these systems grow into full-scale deployments, they can become risky "adult tigers" without proper oversight. Scaling AI quickly brings three major dangers:
A key challenge is that each of these risk areas—accuracy, fairness, and security—typically falls under different teams. Accuracy is often managed by data teams, fairness by inclusion officers, and security by cybersecurity professionals. Bringing them together early in AI projects is critical to avoid blind spots.
Rather than bolting on security at the end of development, organizations should embed it into their "DNA." That means inviting legal, DevSecOps, and security experts into the process from day one—making protections as natural and integrated as water in a wave.
Noelle Russell also recommends expanding existing data governance structures to cover AI systems, rather than creating separate governance bodies—transforming data governance into comprehensive AI governance. To fund this properly, she suggests earmarking around 25% of expected AI-generated revenue to invest in governance and security capabilities.
Cultural change is just as important. Teams need to stay curious and constantly question AI systems: Where does the data come from? How was this conclusion reached? Who should review this? Red-teaming exercises—“breaking” the AI—should involve cross-functional participants, from executives to engineers, looking for vulnerabilities and bias.
Finally, with new regulations like the EU AI Act and emerging U.S. policies, organizations would be wise to adopt existing frameworks—like U.S. government AI strategies or guidelines from leading AI providers—instead of starting from scratch. AI audits, much like financial audits, will become standard in regulated industries, perhaps even performed by other AI systems.
In short, scaling AI responsibly means preparing for the full-grown tiger: tackle the three-headed risk—accuracy, fairness, and security—from the start. Build systems with governance and security embedded, fund it proactively, foster a skeptical and collaborative culture, and leverage existing regulatory frameworks. Only then can organizations scale AI without it turning into a threat to itself.
Source: https://www.paloaltonetworks.com/blog/2025/06/hidden-risks-scaling-ai-too-fast/