Deepfakes started with the Video Rewrite program, created in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney. The program altered existing video footage to create new content of someone mouthing words they didn't speak in the original version.
In late 2022, pro-China propagandists started spreading deepfake videos purporting to be from "Wolf News" that used synthetic actors. The technology was developed by a London company called Synthesia, which markets it as a cheap alternative to live actors for training and HR videos.
Most states have laws punishing revenge porn, but only four – California, New York, Georgia and Virginia – specifically ban nonconsensual deepfake pornography.
TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm. Its updated guidelines say deepfakes of private figures and young people are also not allowed.
But despite some states taking steps forward, there is no federal law tackling deepfake porn. The ability to bring criminal or civil charges against an individual therefore differs between states, and conduct that is illegal in one state may not be illegal in another.
Australia has no specific legislation which addresses the potential misuse of deepfake technology, and we have yet to see a case concerning a deepfake reach the Australian judicial system. Other jurisdictions, however, have begun the process of legislating to address the potential for deepfakes to be misused.
One potential legal concern flowing from these fake images is defamation. A defamation cause of action could arise from an individual using FakeApp or similar software to create a fake video of a person saying or doing something that, if believed to be true, would injure that person's reputation.
If you don't agree to your image being used or manipulated, then it's wrong for someone to do so. It's a line that can be (and has been) easily turned into law: if you deepfake someone without their consent, you risk a criminal charge. The illegality would likely limit (if not stop) the practice.
China has introduced first-of-its-kind regulations banning the creation of AI deepfakes used to spread fake news and impersonate people without consent.
In November, Intel announced its Real-Time Deepfake Detector, a platform for analyzing videos. (The term “deepfake” derives from the use of deep learning—an area of AI that uses many-layered artificial neural networks—to create fake content.)
Voice cloning, also known as deepfake or synthetic voice generation, uses AI and machine learning algorithms to create cloned voices. The technique emerged from advances in artificial intelligence (AI), especially deep learning.
Deepfakes can be harmful, but creating one that is hard to detect is not easy. Producing a deepfake today requires a graphics processing unit (GPU); for a persuasive result, a gaming-class GPU costing a few thousand dollars can be sufficient.
As reported by the local media outlet Jiemian, Tencent Cloud's service can analyze and train itself on three-minute videos and 100 voice clips to produce a convincing deepfake video within 24 hours. The deepfake creation service costs roughly 1,000 yuan, or about $145.
Today, almost anyone can manipulate videos, audio, and images to make them look like something else. You don't need programming skills to create a deepfake: you can make one for free in under 30 seconds using sites like MyHeritage and D-ID, or any of the many free deepfake applications.
Deepfake content is created using two algorithms that compete with one another in what is known as a generative adversarial network (GAN). One is called the generator and the other the discriminator. The generator creates the fake digital content, and the discriminator tries to determine whether that content is real or artificial.
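The competition between the two algorithms can be sketched in a minimal, self-contained toy example. This is an illustrative sketch, not a real deepfake pipeline: the "content" here is just scalar numbers drawn from a Gaussian, the generator is a single affine map, and the discriminator is a logistic classifier, whereas real systems use deep neural networks for both. The adversarial training dynamic, however, is the same: the discriminator learns to separate real from fake, and the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

# "Real" data: scalar samples from a normal distribution N(4, 1).
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator: an affine map G(z) = a*z + b of noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 32
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    gw = gc = 0.0
    for _ in range(batch):
        xr = real_sample()
        z = random.gauss(0.0, 1.0)
        xf = a * z + b
        dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
        # Gradients of the binary cross-entropy loss w.r.t. w and c.
        gw += -(1.0 - dr) * xr + df * xf
        gc += -(1.0 - dr) + df
    w -= lr * gw / batch
    c -= lr * gc / batch

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    ga = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0.0, 1.0)
        df = sigmoid(w * (a * z + b) + c)
        # Non-saturating generator loss -log D(G(z)).
        ga += -(1.0 - df) * w * z
        gb += -(1.0 - df) * w
    a -= lr * ga / batch
    b -= lr * gb / batch

# The generated distribution's mean should drift toward the real mean (4).
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(10000)) / 10000
print(f"generated mean = {gen_mean:.2f}")
```

After training, the generator's output distribution has shifted toward the real data: the discriminator's feedback is the only training signal the generator ever sees, which is exactly the arrangement that lets deepfake generators improve without a hand-written definition of "realistic".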
Lack of trust or ethics issues
If a marketer or brand uses a deepfake video, a consumer may feel manipulated by the campaign and lose trust in the brand. For example, deepfakes could be used to create a fake review, which would be considered unethical.
Deepfakes can be used to spread disinformation, produce propaganda, or defame someone. People, especially political figures and celebrities, are at risk of having their identities stolen and inserted into fake news, which can lead to reputational damage and social unrest.
They represent different aspects of a common voice and appear alongside key verses (such as Kanye for bipolar disorder, and Nipsey for murder). In that sense, Kendrick's video is a reminder that deepfake technology is just a tool, one that can serve artistic expression in the right hands.
However, the technology isn't just for entertainment or fake news. As deepfake technology advances, cyber criminals are stealing identities to access or create online accounts and commit fraud.
The current legislation in India regarding cyber offences committed using deepfakes is not adequate to fully address the issue. The absence of specific provisions in the IT Act, 2000 covering artificial intelligence, machine learning, and deepfakes makes it difficult to effectively regulate these technologies.
It may not be ethical and downloaders might be sued for copyright infringement, but there are no laws that criminalise Australians downloading and watching content for their own individual use.
The Copyright Act 1968 (Cth) governs copyright law in Australia and sets out strict penalties for infringement. Under the Act, it is illegal to reproduce, adapt or communicate copyrighted material without the permission of the copyright owner. This includes downloading or sharing copyrighted material online.
This makes deepfakes a serious threat. While deepfakes can sometimes be used for fun (for example, people make deepfakes online with apps to create memes), the same technology and apps can be used by cybercriminals to do serious harm.