With deepfake technology, bad actors can impersonate others and gain access to sensitive data. Learn more about this threat to cybersecurity and how to prevent it.
Identity Theft and Financial Fraud
Deepfake technology can be used to create entirely new identities or to steal the identities of real people. Attackers use it to forge documents or clone a victim's voice, which enables them to open accounts or purchase products while pretending to be that person.
The threat of deepfakes and synthetic media comes not from the technology used to create them, but from people's natural inclination to believe what they see. As a result, deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading misinformation and disinformation.
Business email compromise (BEC) and other spear phishing attacks have long been a favorite for bad actors looking to steal cash from unsuspecting victims.
A deepfake can depict a person indulging in antisocial behaviors and saying vile things that they never did. Even if the victim can debunk the fake with an alibi or otherwise, that fix may come too late to remedy the initial harm.
TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm. Its updated guidelines say deepfakes of private figures and young people are also not allowed.
He says, “A person could be liable for using deepfake technology to infringe another entity's intellectual property rights or a person's publicity or privacy rights. And the technology can itself be protected by intellectual property rights.”
Deepfakes in general are not illegal.
Under the rules update, synthetic media featuring public figures will still be allowed, subject to certain restrictions: abuse, political misinformation, and commercial endorsements are prohibited.
One potential legal concern flowing from these fake images is defamation. A defamation cause of action could arise when someone uses FakeApp or similar software to create a fake video of a person saying or doing something that would injure that person's reputation if it were true.
Australia has no specific legislation which addresses the potential misuse of deepfake technology, and we have yet to see a case concerning a deepfake reach the Australian judicial system. Other jurisdictions, however, have begun the process of legislating to address the potential for deepfakes to be misused.
While there are legitimate uses for deepfakes, they can also put businesses at risk for exploitation and fraud, and pose a significant threat to authentication technologies.
Deepfakes, or synthetically generated videos and images, can be used for a variety of purposes, including entertainment and education. They can, however, also be used to spread misinformation, impersonate others, and commit other forms of abuse.
Cybercriminals Can Also Use AI
Advanced AI can also be used to defeat facial recognition technology and commit further fraud or theft. To ensure AI-based cybersecurity stays ahead of AI-based attacks, detection systems will need to be regularly updated to learn new attack methods.
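As a loose illustration of what "regularly updated to learn new attack methods" can mean in practice, the sketch below retrains a simple spoof/deepfake detector whenever newly observed attack samples become available. It is only a minimal sketch under assumed conditions: it assumes face images have already been reduced to fixed-length feature vectors and that analysts have labeled the new attack samples; the synthetic data, function name retrain_detector, and the choice of a random forest are illustrative, not any vendor's actual pipeline.

# Minimal sketch: periodically fold newly observed attack samples into the
# training set and refit a detector, so the defense keeps pace with new
# attack methods. Assumes precomputed, fixed-length feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def retrain_detector(known_features, known_labels, new_attack_features):
    """Add newly labeled attack samples to the training data and refit."""
    X = np.vstack([known_features, new_attack_features])
    # Label convention: 0 = genuine, 1 = spoofed/deepfaked
    y = np.concatenate([known_labels, np.ones(len(new_attack_features))])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    # Report how well the refreshed model separates genuine and fake samples
    print(classification_report(y_test, model.predict(X_test)))
    return model

# Synthetic stand-in data (illustrative only, not real face embeddings):
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(500, 128))
old_fakes = rng.normal(1.5, 1.0, size=(500, 128))
features = np.vstack([genuine, old_fakes])
labels = np.concatenate([np.zeros(500), np.ones(500)])
new_fakes = rng.normal(0.8, 1.0, size=(100, 128))  # a "new attack method"
detector = retrain_detector(features, labels, new_fakes)

The point of the sketch is the loop, not the model: each time attackers shift technique, the detector is only as good as the most recent examples it has been retrained on.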
Malware is the most common type of cyberattack, largely because the term encompasses many subsets, such as ransomware, trojans, spyware, viruses, worms, keyloggers, bots, cryptojacking, and any other attack that leverages software in a malicious way.
So, for now the use of deepfakes in a parody show like Neighbour Wars is perfectly legal, but it's possible that the law may change in the coming years and place more restrictions on how deepfake technology can be used in entertainment.
China has introduced first-of-its-kind regulations banning the creation of AI deepfakes used to spread fake news and impersonate people without consent.
The only states with legislation concerning deepfakes are Virginia, Texas, and California. Virginia's and most of California's legislation refers directly to pornographic deepfakes, and Texas's and some of California's legislation refers to a specific subset of informational deepfakes.
However, the technology isn't just for entertainment or fake news. As deepfake technology advances, cyber criminals are stealing identities to access or create online accounts and commit fraud.
There are no specific laws protecting victims of nonconsensual deepfake pornography, and new proposals are likely to fall short. The Digital Services Act (DSA) obliges platforms to provide procedures by which illegal content can be reported and taken down.
Internet censorship and surveillance are tightly enforced in China, blocking social websites and services such as Gmail, Google, YouTube, Facebook, Instagram, Twitter, and others. The excessive censorship practices of the Great Firewall of China have now engulfed VPN service providers as well.
Copyright laws do not necessarily bar deepfakes; in fact, they permit them in most instances. Deepfakes likely fall under the “fair use” exception to copyright infringement.
Deepfakes can infringe on intellectual property rights
Intellectual property can include books, paintings, films, and computer programs. Concerns arise when someone uses deepfake technology to pose as the person who owns that intellectual property.
Impersonating a money manager and calling about a money transfer has been a popular scam for years, and now criminals can use deepfakes in video calls. For example, they could impersonate someone and contact that person's friends and family to request a money transfer or ask for a simple top-up of their phone balance.