We are sharing this update from ACCA, our professional body, for the benefit of clients and contacts. The content is © ACCA.

Read the latest update from the National Cyber Security Centre

Action Fraud, the UK’s reporting body for fraud and cybercrime, has received increasing numbers of reports involving the use of AI-created synthetic content, also known as deep fake technology. This technology has been used to deceive, defraud, harass and extort victims through impersonation scams, online sexual blackmail and investment fraud.

Unlike a spoofed email or text message, a deep fake allows a criminal to clone a target's voice or recreate their face and engage in personal communication with them. Video and audio are not generally perceived to be as easy to manipulate as an email; deep fake technology undermines that assumption.

Recommended measures to help potential victims avoid falling for deep fake/AI scams:

  • Suspicious contacts: If there is any doubt about a message or phone call, even if it appears to come from a recognised brand or individual, contact the individual or organisation directly to verify it, using previously known contact details or those obtained from the official website.
  • Keep private and personal details confidential online: Financial institutions will never initiate contact to ask for funds to be moved, or to request login details or bank details. If a situation appears suspicious, end the communication.
  • Investment opportunities: Do not make investment decisions in haste or under pressure. Legitimate organisations will not pressure anyone to invest on the spot.
  • Seek advice first: When considering significant financial decisions, consult trusted friends or family members, or seek independent professional advice.

Updated SMS and telephone guidance

The rise in artificial inflation of traffic (AIT) is leaving many businesses out of pocket. AIT is a technique in which criminals generate large volumes of fake traffic through apps or websites. In a typical AIT scenario:

  • A fraudster uses a bot to create large numbers of fake accounts.
  • The fake accounts trigger one-time passcode (OTP) SMS messages to mobile numbers during multi-factor authentication (MFA).
  • The fraudster partners with a rogue party in the mobile ecosystem (an operator or aggregator) to intercept the traffic; the messages are never actually delivered to the end users.
  • Together, the fraudster and the rogue party share the profit.

To counter this growing threat, the National Cyber Security Centre has updated its SMS and telephone best practice guidance, which is designed to help organisations and their customers reduce their exposure to SMS- and telephone-related fraud.
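For technically minded readers, the sketch below shows one form an application-level control against AIT might take: velocity checks that cap how many OTP SMS messages can be sent to a single number, and to a single number range, within a time window, since bot-driven AIT bursts typically hammer many numbers in one range. This is a minimal Python illustration, not taken from the NCSC guidance; the class name, thresholds and window size are hypothetical, and a real service would tune them against its own traffic baseline.

    import time
    from collections import defaultdict, deque

    # Hypothetical thresholds: real values would be tuned to the service's own traffic.
    MAX_PER_NUMBER = 3     # OTP sends allowed to one mobile number per window
    MAX_PER_PREFIX = 50    # OTP sends allowed to one number range per window
    WINDOW_SECONDS = 3600  # one-hour sliding window

    class OtpRateLimiter:
        """Velocity checks on OTP SMS sends: a common first defence against AIT."""

        def __init__(self):
            self._by_number = defaultdict(deque)  # number -> recent send timestamps
            self._by_prefix = defaultdict(deque)  # number range -> recent send timestamps

        @staticmethod
        def _prune(queue, now):
            # Drop timestamps that have fallen out of the sliding window.
            while queue and now - queue[0] > WINDOW_SECONDS:
                queue.popleft()

        def allow_send(self, msisdn):
            """Return True if an OTP SMS to this number should be sent now."""
            now = time.time()
            prefix = msisdn[:7]  # crude number-range key (country code + operator block)
            number_q = self._by_number[msisdn]
            prefix_q = self._by_prefix[prefix]
            self._prune(number_q, now)
            self._prune(prefix_q, now)
            # AIT bursts usually spread across many numbers in one range,
            # so both the per-number and the per-range budgets matter.
            if len(number_q) >= MAX_PER_NUMBER or len(prefix_q) >= MAX_PER_PREFIX:
                return False
            number_q.append(now)
            prefix_q.append(now)
            return True

    limiter = OtpRateLimiter()
    print(limiter.allow_send("+447700900123"))  # True: a genuine first request
    for i in range(60):                         # simulate a bot burst across one number range
        limiter.allow_send("+44770090%04d" % i)
    print(limiter.allow_send("+447700900999"))  # False: the range's budget is exhausted

A control like this would sit alongside, not replace, the contractual and routing measures in the NCSC guidance, and would typically be paired with bot detection at account sign-up, the first step in the AIT chain.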