AI Phishing Gets Weirdly Personal, Fast

What happened: WIRED’s Will Knight describes watching an AI-driven spearphishing attempt unfold—personalized, plausible, and annoyingly good at keeping the conversation going—because an open-source model (DeepSeek-V3) could draft the hook and riff in real time.

Why it matters: This is the boringly lethal upgrade: not “superhuman hacking,” but persuasion at scale. If a model can flatter, research targets, and keep you chatting until you click, the human part of the kill chain becomes the easiest thing to automate.

Wider context: A startup called Charlemagne Labs runs attacker-vs-target simulations across models (DeepSeek, Claude, GPT-4o, Nemotron, Qwen) to measure how convincing they are. Some models refuse, some glitch out, and some act like a career con artist who read your LinkedIn.

Background: Knight frames social engineering as the more urgent risk, set against the hype about frontier models finding zero-days. His takeaway: social engineering already works, and AI mostly adds speed, scale, and a level of creepy personalization that security training slides were not built for.


Singularity Soup Take: Everyone’s bracing for the Terminator, and the actual threat is “Sycophancy-as-a-Service.” The future of cybersecurity is apparently teaching humans not to trust charming emails written by a machine that desperately wants your friendship (and your credentials).

Key Takeaways:

  • The Real Automation: The story’s punchline is that AI doesn’t need to discover novel exploits to be dangerous—automating the human manipulation layer is enough to scale scams dramatically.
  • Defense Needs Measurement: Charlemagne Labs’ simulation approach is basically red-teaming for persuasion: run thousands of attempts, see which models stay coherent, and quantify how often a “target” model or judge model spots the con.
  • Open Source Tradeoff: The article anticipates the coming argument: open models can fuel abuse, but defenders also rely on open ecosystems to build countermeasures—meaning we’re likely headed for “restricted access” debates that address liability more than harm.
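The measurement loop that takeaway describes can be sketched in a few lines. This is a hypothetical harness, not Charlemagne Labs' actual code: `attacker_pitch` and `judge_flags` are invented stand-ins where a real benchmark would call attacker and judge LLM APIs, and the "polish" score is a toy stand-in for how convincing a drafted pitch is.

```python
import random

def attacker_pitch(rng):
    """Stub attacker model: returns a phishing pitch plus a crude
    'polish' score standing in for how convincing the draft is."""
    return {"text": "Hi! Loved your recent talk...", "polish": rng.random()}

def judge_flags(pitch, threshold=0.7):
    """Stub judge model: flags any pitch whose polish falls below a
    threshold as an obvious con. A real judge would be another LLM."""
    return pitch["polish"] < threshold

def detection_rate(n_trials=1000, seed=0):
    """Run many attacker-vs-judge trials and report the fraction of
    pitches the judge caught -- the quantity you'd actually track."""
    rng = random.Random(seed)
    flagged = sum(judge_flags(attacker_pitch(rng)) for _ in range(n_trials))
    return flagged / n_trials

if __name__ == "__main__":
    print(f"judge flagged {detection_rate():.1%} of simulated pitches")
```

The point of the sketch is the shape of the metric, not the stubs: swap in real model calls and the same loop yields a per-model "how often does the con get caught" number that can be compared across DeepSeek, Claude, GPT-4o, and the rest.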