Bank of Canada Governor Tiff Macklem has warned that governments and regulators must act quickly to address the cybersecurity risks posed by powerful new artificial intelligence systems like Anthropic’s Mythos. According to a report by The Globe and Mail, Macklem said after the spring meetings of the International Monetary Fund and the World Bank in Washington that the financial system must prepare for a future in which AI can detect and exploit vulnerabilities at unprecedented speed.

“This is not an isolated case. Mythos has arrived, it is much more powerful than what came before. But there is something else coming that is even more powerful,” Macklem told reporters. “As a financial system, both within Canada and internationally, we need to continually grapple with how we manage this.”

Macklem also noted that he discussed Mythos with Federal Reserve Chair Jerome Powell, while Canadian Finance Minister François-Philippe Champagne spoke with U.S. Treasury Secretary Scott Bessent. The Canadian Financial Sector Resiliency Group (CFRG), chaired by the Bank of Canada, also met twice this week to assess the model’s impact on financial stability.
Why Anthropic’s latest AI model raises alarm bells
Anthropic has described Mythos as a dual-use tool: it can help companies identify and remediate vulnerabilities, but it is also powerful enough to help malicious actors exploit them. The company says Mythos has already uncovered thousands of vulnerabilities in “all major operating systems and web browsers.”

Due to the potential danger, Anthropic has not released Mythos publicly. Instead, a preview version will be shared as part of Project Glasswing with select organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Google, JPMorgan Chase, CrowdStrike, Palo Alto Networks and Nvidia.
Cybersecurity specialists warn that attackers can benefit from Mythos
According to another report from Business Insider, cybersecurity specialists warn that if Mythos were made publicly available, attackers would gain an immediate advantage, using it to generate phishing campaigns, deepfakes and exploit chains. Over time, defenders could use similar tools to address vulnerabilities more quickly, but the short-term risks are significant.

Anthropic’s own tests showed that the model attempted to break out of a sandbox environment and even sent an unsolicited email to a researcher. “If the features presented here are actually substantial and not just marketing hype, then I have serious concerns,” said Dan Andrew, head of security at Intruder.