One of the most popular use cases for deepfakes is creating dubbed videos in which people appear to speak several different languages. One such example features soccer player David Beckham appearing to speak nine different languages in a campaign to raise awareness of malaria and how the disease can be brought under control.
Nostalgic Ad Campaigns
State Farm produced one famous example of deepfake technology. The insurance company promoted the ESPN series The Last Dance with an ad that altered 1998 SportsCenter footage to make it look like anchor Kenny Mayne had predicted the documentary.
An organization thought it had hired a remote employee to provide technical support. Instead, it had hired a criminal who built a false persona using deepfake technology and stolen personally identifiable information, with the intent of gaining access to the company's network and data.
Instead of showing a person's real face on camera, some deepfakes substitute an AI-generated avatar. Deepfake technology is also used for research purposes in medicine, creating realistic simulations of real-world situations that can be used for training.
Real-time deepfakes have been used to scare grandparents into sending money to simulated relatives, win jobs at tech companies in a bid to gain inside information, influence voters and siphon money from lonely men and women.
Deepfake media (deepfakes) threaten public trust in video and present challenges for law enforcement with new types of investigations, evidence management, and trials. Deepfake media have already been used to commit crimes from harassment to fraud, and their use in crimes will likely expand.
TikTok more clearly spells out its policy, saying all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they're fake or altered in some way. TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm.
Deepfake videos are not inherently illegal. However, depending on what a video contains, it could breach legal codes. For example, if it is a pornographic face-swap video or photo, the victim may be able to claim defamation or copyright infringement.
Using technology
In processes that require more thorough verification, deepfake detection software or online liveness detection systems (e.g., having the user take a selfie or join a live video link in real time) can be used.
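As a loose illustration of the "challenge" side of active liveness detection, the Python sketch below asks for at least one blink during a short webcam capture, using only OpenCV's bundled Haar cascades. The frame count and the blink heuristic are illustrative assumptions, not values from any production system.

```python
# Toy active-liveness sketch: require at least one blink during a short
# webcam capture. Uses OpenCV's bundled Haar cascades; the thresholds
# and frame count are illustrative assumptions, not production values.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def looks_live(num_frames: int = 150) -> bool:
    """Return True if a face is seen and its eyes briefly disappear
    (a crude blink signal) within num_frames webcam frames."""
    cap = cv2.VideoCapture(0)          # default webcam
    saw_face = False
    saw_blink = False
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                saw_face = True
                roi = gray[y:y + h, x:x + w]
                eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
                if len(eyes) == 0:     # eyes undetected -> possible blink
                    saw_blink = True
    finally:
        cap.release()
    return saw_face and saw_blink

if __name__ == "__main__":
    print("liveness check passed" if looks_live() else "liveness check failed")
```

A real liveness system would combine several signals, such as depth, texture analysis, and randomized challenge-response prompts, rather than relying on a single blink heuristic.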
There are several methods for creating deepfakes, but the most common relies on deep neural networks that employ a face-swapping technique. You first need a target video to use as the basis of the deepfake, and then a collection of video clips of the person you want to insert into the target.
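The classic face-swap pipeline trains one shared encoder with a separate decoder per identity; at swap time, a frame of person A is encoded and then decoded with person B's decoder. The PyTorch sketch below is a minimal, heavily simplified illustration of that structure; the layer sizes, the 64x64 input resolution, and the training step are assumptions made for brevity, not a faithful reproduction of any particular tool.

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Layer sizes and the 3x64x64 face-crop input are illustrative assumptions.
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    # Compress a 3x64x64 face crop into a latent vector.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 8x8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 512),
    )

def make_decoder() -> nn.Module:
    # Reconstruct a 3x64x64 face crop from the shared latent vector.
    return nn.Sequential(
        nn.Linear(512, 128 * 8 * 8),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 64x64
    )

encoder = make_encoder()
decoder_a = make_decoder()   # learns to redraw person A
decoder_b = make_decoder()   # learns to redraw person B

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a: torch.Tensor, faces_b: torch.Tensor) -> float:
    """One reconstruction step: each decoder learns its own identity,
    while the encoder is shared between both."""
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_a_to_b(faces_a: torch.Tensor) -> torch.Tensor:
    """The actual face-swap step: encode person A, decode as person B."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

In practice, the face crops come from a detector and aligner run over both the target video and the source clips, and the swapped crops are blended back into the original frames.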
Deepfakes (a portmanteau of "deep learning" and "fake") are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another, typically by altering facial appearance with deep generative methods.
These abuses might have negative effects on a person's reputation and undermine public confidence in digital media. For instance, even if someone is ultimately found to be innocent, their reputation might suffer if a deepfake video is used to portray them as having committed a crime.
In general, the development of this type of synthetic media can be traced back to the 1990s. But deepfake technology as we know it today often relies on generative adversarial networks (GANs), which did not exist until 2014, when they were introduced by computer scientist Ian Goodfellow. The word "deepfake" itself emerged in 2017.
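For readers unfamiliar with the term, a GAN pairs a generator that fabricates samples with a discriminator that tries to tell fakes from real data, and the two are trained against each other. The short PyTorch sketch below shows only that adversarial loop on flattened 64x64 images; the tiny fully connected networks and the hyperparameters are placeholder assumptions.

```python
# Bare-bones GAN training loop: a generator fabricates samples, a
# discriminator scores real vs. fake, and each is updated against the other.
# Network sizes and hyperparameters here are placeholder assumptions.
import torch
import torch.nn as nn

LATENT, IMG = 100, 64 * 64 * 3   # noise size, flattened 64x64 RGB image

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_images: torch.Tensor) -> None:
    """One adversarial update on a batch of flattened real images."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to label real images 1 and fakes 0.
    fake = generator(torch.randn(batch, LATENT)).detach()
    loss_d = bce(discriminator(real_images), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator call its fakes real.
    fake = generator(torch.randn(batch, LATENT))
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```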
Australia has no specific legislation which addresses the potential misuse of deepfake technology, and we have yet to see a case concerning a deepfake reach the Australian judicial system. Other jurisdictions, however, have begun the process of legislating to address the potential for deepfakes to be misused.
Deepfakes make it even easier for fraudsters to commit identity theft. By mimicking the images and voices of customers or staff, deepfakes can fool your business into granting account access, authorizing purchases, transferring funds, and more.
If you don't agree to your image being used or manipulated, then it's wrong for someone to do so. It's a line that can be (and has been) easily turned into law — if you deepfake someone without their consent, then you risk a criminal charge. The illegality would certainly limit (if not stop) its use.
Deepfake audio poses an increasing threat to voice-based authentication systems. However, creating a deepfake voiceprint requires sample data. Consequently, individuals with extensively available voice samples, such as celebrities and politicians, are particularly vulnerable to this tactic.
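To make the "voiceprint" idea concrete, the hedged sketch below compares two recordings by cosine similarity of speaker embeddings, using the open-source resemblyzer package; the file names and the 0.75 acceptance threshold are illustrative assumptions, not recommended settings. The same mechanics explain the risk: a sufficiently good voice clone built from publicly available samples can produce an embedding that clears the same threshold.

```python
# Sketch of voiceprint-style matching with speaker embeddings.
# Requires the open-source `resemblyzer` package; the file names and the
# 0.75 threshold are illustrative assumptions.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

def voiceprint(path: str) -> np.ndarray:
    """Return an L2-normalized speaker embedding for one recording."""
    return encoder.embed_utterance(preprocess_wav(Path(path)))

def same_speaker(enrolled_path: str, attempt_path: str,
                 threshold: float = 0.75) -> bool:
    """Compare cosine similarity of the two embeddings to a threshold."""
    a, b = voiceprint(enrolled_path), voiceprint(attempt_path)
    similarity = float(np.dot(a, b))   # both embeddings are unit length
    return similarity >= threshold

if __name__ == "__main__":
    print(same_speaker("enrolled_customer.wav", "incoming_call.wav"))
```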
Lack of Trust and Ethics Issues
If a marketer or brand uses a deepfake video, a consumer may feel manipulated by the campaign and not trust the brand in the future. For example, deepfakes could be used to create a fake review, which would be considered unethical.
The FBI is warning the public about the use of “deepfakes” to harass or blackmail targets with fake sexually explicit photos or videos of them.
The researchers found that most application programming interfaces (APIs) that use facial liveness verification, a feature of facial recognition technology that uses computer vision to confirm the presence of a live user, do not always detect digitally altered photos or videos of individuals made to look like a live person.
A report from University College London (UCL) ranked deepfakes as the most severe AI-enabled crime threat to date. Apart from the threat of spoofing, the experts point to fake audio and video content being used for extortion.
Pros of Deepfake Technology
One of the biggest advantages of deepfakes is that they can be used to create high-quality entertainment content. For example, deepfakes can be used to create realistic characters in movies, TV shows, and video games.
One of the first introductions to the idea behind deepfakes for audiences around the world was the 1994 film Forrest Gump, in which the titular character meets President John F. Kennedy.