The Internet Crime Complaint Center (IC3), an arm of the Federal Bureau of Investigation (FBI) in the U.S., is warning of an increase in complaints involving the use of deepfakes and stolen personally identifiable information (PII) to apply for a variety of remote work and work-at-home positions.
Deepfakes use deep learning algorithms to alter a video, image, or audio recording so that it misrepresents someone as doing or saying something they never actually did or said.
In the cases identified by the FBI, voice spoofing is employed during online interviews for a wide range of IT jobs. The FBI noted that the actions and lip movements of the person interviewed on camera do not fully synchronize with the audio of the person speaking. Auditory cues such as coughing or sneezing are also not aligned with what is presented visually.
Deepfakes created using celebrities are now fairly common on the Internet. Pornography created with the technology has also become enough of an issue that the Law Commission of England and Wales is calling for new legislation that makes it a crime to create such imagery without the express permission of the individual portrayed. Deepfakes are also being incorporated into social engineering attacks; in one case, deepfake voice technology was used to mimic the voice of a company director asking a bank to wire funds for an acquisition that never occurred, resulting in the theft of $35 million. It's now only a matter of time before these technologies are more widely employed as part of, for example, a business email compromise (BEC) attack.
Fighting deepfakes with multifactor authentication
As deepfake technologies are employed more widely in BEC attacks, the need for more robust forms of multifactor authentication (MFA) should finally become more apparent. The challenge, of course, is that MFA has always been somewhat cumbersome to implement and then difficult to get users to adopt. It adds steps to a process that many end users resist no matter how often credentials such as user names and passwords are stolen.
Fortunately, Google, Apple, and Microsoft, via the FIDO Alliance, are all committed to building support for passwordless sign-in capabilities based on MFA into their platforms. So it should soon become easier for organizations to adopt MFA using an alternative to passwords based on FIDO Universal 2nd Factor (U2F), the FIDO Universal Authentication Framework (UAF), and FIDO2, an associated set of specifications. The ultimate goal is to make it simple to authenticate access requests in real time rather than relying on a password being matched against a database of secrets maintained by an internal IT team.
Unfortunately, it will still be a few years before that goal is realized. In the meantime, social engineering attacks are going to become even more sophisticated as the quality of deepfakes steadily improves. It will, of course, remain difficult for a deepfake to accurately mimic an individual for any substantial amount of time. However, as most cybersecurity professionals know all too well, it already doesn't take much for gullible end users to fall victim to relatively crude social engineering attacks, which will only become that much more convincing once deepfakes are included.
Mike Vizard has covered IT for more than 25 years and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb, and Slashdot. Mike also blogs about emerging cloud technology for SmarterMSP.