Arup, a London-based multinational design and engineering firm, fell victim to one of the most sophisticated videoconference fraud schemes on record when an employee authorized wire transfers totaling $25.6 million, in a case that came to light in February 2024, all based on conversations with people who did not exist.
The Arup employee, working from a Hong Kong office, received an email from someone claiming to be the company's Chief Financial Officer, who requested their participation in a confidential financial transaction. When the employee joined the scheduled video call, they saw and heard what appeared to be senior company officials and colleagues, all discussing urgent fund transfers that required immediate authorization.
Unbeknownst to the employee, every other participant in that videoconference was an artificial creation. Scammers had used publicly available video footage of actual Arup staff members to generate synthetic deepfake avatars, complete with realistic facial movements, expressions, and voices. Over approximately one week, the employee, acting on instructions delivered through the fake meetings, made 15 separate transfers to five different bank accounts while the fraudsters maintained the illusion of legitimate internal communications.
**When AI Meets Social Engineering**
The scheme represents a significant escalation in fraud sophistication. Traditional CEO fraud — where criminals impersonate executives to pressure financial transfers — has cost companies billions globally. But this case introduced a new dimension: rather than relying on email alone or a single impersonated voice call, the perpetrators created an entire false ecosystem of company personnel, all visible and audible in real time.
What made the scam particularly effective was its exploitation of trust hierarchies within large organizations. Arup, founded in 1946, operates across multiple continents with thousands of employees. The victim had no reason to question seeing and hearing what appeared to be their CFO and colleagues in an internal video meeting — the very technology that has become standard in global corporate operations.
The fraud remained undetected for approximately seven days. Only when other Arup staff members noticed irregular transactions did the scheme unravel. Upon investigation, the company confirmed that the other video conference participants — both visually and vocally — were entirely fabricated.
**Law Enforcement Response and Arrests**
Hong Kong police initiated formal investigations into the fraud, treating it as a serious financial crime. As part of related operations targeting deepfake-based identity fraud in the region, authorities arrested six individuals suspected of involvement in similar schemes. The investigation highlighted how criminal networks across Asia have begun adopting deepfake technology as a tool for large-scale financial crime.
**The Broader Deepfake Threat**
This case arrives amid growing warnings from cybersecurity experts and corporate fraud prevention specialists about the convergence of AI technology and traditional social engineering tactics. Unlike previous deepfake cases that focused on single individuals or political influence, the Arup incident demonstrated the feasibility of creating multiple synthetic participants simultaneously — dramatically increasing the psychological pressure on victims and reducing opportunities for verification.
For international businesses, particularly those with workforces dispersed across time zones, the implications are significant. Video conferencing has become so normalized that employees under time pressure, confronted with what looks like organizational authority, may never think to question the authenticity of a call from their colleagues and supervisors.
Security experts now recommend that companies implement multi-factor verification protocols for any significant financial transfers, regardless of video call confirmation. Some organizations have begun requiring in-person or secondary authentication through previously established communication channels for transactions above certain thresholds.
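The threshold-and-callback policy described above can be sketched in a few lines of code. This is a minimal illustration, not any company's actual control: the threshold value, field names, and the `verified_out_of_band` flag are all hypothetical, standing in for whatever a real treasury system would record after a callback to a pre-registered phone number or an in-person confirmation.

```python
from dataclasses import dataclass

# Illustrative threshold: transfers above this amount require secondary
# verification through a previously established channel. The figure is a
# placeholder, not a recommendation.
OUT_OF_BAND_THRESHOLD = 100_000

@dataclass
class TransferRequest:
    amount: float
    beneficiary_account: str
    requested_via: str                    # e.g. "video_call", "email"
    verified_out_of_band: bool = False    # set True only after callback succeeds

def may_execute(req: TransferRequest) -> bool:
    """A video call or email alone never authorizes a large transfer."""
    if req.amount <= OUT_OF_BAND_THRESHOLD:
        return True
    # Above the threshold, require verification through a channel established
    # *before* the request (a phone number on file, an in-person check),
    # regardless of how convincing the original request appeared.
    return req.verified_out_of_band

req = TransferRequest(amount=2_000_000,
                      beneficiary_account="HK-0000-demo",
                      requested_via="video_call")
print(may_execute(req))   # False: blocked until out-of-band verification
req.verified_out_of_band = True
print(may_execute(req))   # True: callback through a known channel succeeded
```

The key design point is that the gate ignores `requested_via` entirely: how persuasive the request looked on screen carries no weight, only whether an independent channel confirmed it.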
The case underscores how deepfake technology has moved beyond being a theoretical threat or a tool for creating misleading media content. It has become a practical weapon in organized fraud operations, particularly when combined with knowledge of corporate structures and access to training data in the form of public videos of real employees.
As artificial intelligence becomes more sophisticated and accessible, security specialists warn that this case may represent the beginning of a new category of enterprise fraud that will require fundamental changes to how organizations verify the authenticity of internal communications.