Deepfakes started with the Video Rewrite program, created in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney.
“I don't define something as viral when everyone in the world watches it. I think something goes viral when the people who watch it end up watching it over and over again.” This project is the brainchild of Chris Ume, a VFX and AI artist, who created the deepfake technology alongside Fisher, an actor and singer.
But despite some states taking steps forward, there is no federal law tackling deepfake porn, which means the ability to bring criminal or civil charges against an individual differs between states, and conduct that is illegal in one state may not be illegal in another.
In the late '90s, an academic paper that explored the deepfake concept laid out a program that would be the first instance of what we would call deepfake technology today. It drew upon earlier work done around analyzing faces, synthesizing audio from text, and then modeling the actions of the human mouth in 3D space.
Deepfake technology was first developed by researchers at academic institutions in the 1990s, and later by amateurs in online communities. More recently, the methods have been adopted by industry.
TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm. Its updated guidelines say deepfakes of private figures and young people are also not allowed.
Australia has no specific legislation which addresses the potential misuse of deepfake technology, and we have yet to see a case concerning a deepfake reach the Australian judicial system. Other jurisdictions, however, have begun the process of legislating to address the potential for deepfakes to be misused.
One potential legal concern flowing from these fake images is defamation. A defamation cause of action could arise when an individual uses FakeApp or similar software to create a fake video of a person saying or doing something that would injure that person's reputation if viewers believed the video to be real.
If you don't agree to your image being used or manipulated, then it's wrong for someone to do so. It's a line that can be (and has been) easily turned into law — if you deepfake someone without their consent, then you risk a criminal charge. The illegality would certainly limit (if not stop) its use.
Jerome LeBlanc: Becoming a Top Gun Maverick impersonator first started off as a way to make ends meet while I was on a school visa. I was then noticed by several entertainment agencies – and after doing many events, I created a company called “California Tom Cruise”.
LeBlanc, who is blessed with good looks and bears an uncanny resemblance to the real Tom Cruise, has been able to find work impersonating the Top Gun actor and has built up his Instagram following over the course of six years.
"Tom and I were taking guitar classes together because we both had to learn how to play guitar, and it was pretty surreal jamming out with Tom singing 'Every Rose Has Its Thorn,' " Boneta recalled. "He was playing the solo, and I was playing all the chords and stuff."
In November, Intel announced its Real-Time Deepfake Detector, a platform for analyzing videos. (The term “deepfake” derives from the use of deep learning—an area of AI that uses many-layered artificial neural networks—to create fake content.)
A rush of new research has introduced several deepfake video-detection methods. Some of these methods claim detection accuracy in excess of 99 percent in special cases, but such accuracy reports should be interpreted cautiously.
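One reason for caution is that accuracy measured against the same kind of fakes a detector was tuned on often collapses against fakes from an unseen generator. The toy sketch below is purely illustrative (random one-dimensional "artifact scores" with made-up distributions, not a real detector) but shows how the same threshold can score near-perfectly on familiar fakes and near chance on unfamiliar ones:

```python
import random

random.seed(0)

# Toy stand-in for a deepfake detector: each video gets one "artifact
# score", and scores above a fixed threshold are flagged as fake.
# All distributions and numbers here are illustrative assumptions.
def scores(mean, n=1000):
    return [random.gauss(mean, 0.5) for _ in range(n)]

real        = scores(0.0)  # genuine videos
fake_seen   = scores(3.0)  # fakes from the generator the detector was tuned on
fake_unseen = scores(0.7)  # fakes from a generator never seen in training

THRESHOLD = 1.5  # separates `real` from `fake_seen` almost perfectly

def accuracy(reals, fakes):
    correct = sum(x < THRESHOLD for x in reals) + sum(x >= THRESHOLD for x in fakes)
    return correct / (len(reals) + len(fakes))

in_dist = accuracy(real, fake_seen)    # near 99+ percent
cross   = accuracy(real, fake_unseen)  # barely better than chance
print(f"in-distribution accuracy: {in_dist:.3f}")
print(f"cross-generator accuracy: {cross:.3f}")
```

The same detector, the same threshold: only the test data changed. This is why a headline accuracy figure says little without knowing which generators, compression levels, and datasets were used for evaluation.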
China has introduced first-of-its-kind regulations banning the creation of AI deepfakes used to spread fake news and impersonate people without consent.
They represent different aspects of a common voice, and appear alongside key verses (such as Kanye for bipolar disorder, and Nipsey for murder). In that sense, Kendrick's video is a reminder that deepfake technology is just a tool, and can be useful for artistic expression in the right hands.
Often, they inflict psychological harm on the victim, reduce employability, and affect relationships. Bad actors have also used this technique to threaten and intimidate journalists, politicians, and other semi-public figures. Furthermore, cyber criminals use deepfake technology to conduct online fraud.
Illegal and restricted online content includes material that shows or encourages child sexual abuse, terrorism or other extreme violence. eSafety can direct an online service or platform to remove illegal content or ensure that restricted content can only be accessed by people who are 18 or older.
The Copyright Act 1968 (Cth) governs copyright law in Australia and sets out strict penalties for infringement. Under the Act, it is illegal to reproduce, adapt or communicate copyrighted material without the permission of the copyright owner. This includes downloading or sharing copyrighted material online.
It may not be ethical and downloaders might be sued for copyright infringement, but there are no laws that criminalise Australians downloading and watching content for their own individual use.
This makes deepfakes a serious threat. While deepfakes are sometimes used for fun (for example, people use deepfake apps to create memes), the same technology and apps can be used by cybercriminals to do serious harm.
As facial recognition software is increasingly used to unlock smartphones and computers, to name just a few use cases, deepfakes raise the prospect of spoofing these systems and committing genuine identity fraud.
Deepfake voices and synthetic voices are two AI techniques for generating artificial speech from a real person's voice. Deepfake voices are created by training machine learning models on large amounts of audio data to mimic a specific speaker.
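The core idea can be sketched in a drastically simplified, non-audio form: "training" estimates statistics of the target speaker from many samples, and "conversion" maps new input toward them. Everything below is a hypothetical illustration (small feature vectors standing in for spectral statistics); real systems train deep neural networks on raw audio.

```python
import random

random.seed(1)

def mean_profile(utterances):
    """Learn a speaker profile as the per-dimension mean of their samples."""
    dims = len(utterances[0])
    n = len(utterances)
    return [sum(u[d] for u in utterances) / n for d in range(dims)]

def convert(utterance, source_profile, target_profile):
    """Re-center an utterance from the source profile onto the target's."""
    return [x - s + t for x, s, t in zip(utterance, source_profile, target_profile)]

# Illustrative data: the target speaker's "voice" clusters around
# (2.0, -1.0, 0.5), the source speaker's around (0, 0, 0). The more
# training samples available, the better the learned profile.
target_data = [[random.gauss(m, 0.1) for m in (2.0, -1.0, 0.5)] for _ in range(200)]
source_data = [[random.gauss(0.0, 0.1) for _ in range(3)] for _ in range(200)]

target_profile = mean_profile(target_data)
source_profile = mean_profile(source_data)

cloned = convert(source_data[0], source_profile, target_profile)
print("learned target profile:", [round(v, 2) for v in target_profile])
print("converted utterance:  ", [round(v, 2) for v in cloned])
```

The toy captures why data volume matters: the speaker profile is learned from samples, so more recordings of the target voice yield a closer imitation, which is also why public figures with hours of available audio are the easiest targets.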