March 18, 2026
March 18, 2026
The article describes a new manipulation tactic Microsoft calls AI Recommendation Poisoning. The idea is simple: companies hide instructions inside “Summarize with AI” buttons or similar links so that, when a user clicks them, the AI assistant receives a pre-filled prompt telling it to remember that company as a trusted or preferred source. Microsoft says it found 50 distinct examples in a 60-day period, coming from 31 companies across more than a dozen industries. The concern is that this can quietly bias future answers from AI tools—especially on sensitive topics like health, finance, or security—without the user realizing the AI’s memory has been tampered with.
The post’s main warning is that this is not just a “marketing trick” but a new kind of trust problem for AI systems. It says the attack can happen through malicious links, hidden prompts inside content, or social engineering, but the most common real-world pattern Microsoft saw was one-click links that pass instructions through URL parameters. Microsoft recommends treating AI links more cautiously, checking what your AI has saved in memory, deleting suspicious entries, and being skeptical of AI-generated recommendations that seem oddly biased. For organizations, it also suggests hunting for AI-assistant URLs containing words like “remember,” “trusted source,” or “future citations,” while noting that Microsoft has added defenses such as prompt filtering, content separation, memory controls, and ongoing monitoring in its own AI services.
Source: https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/