Deepfake content is created by two algorithms that compete with one another: a generator and a discriminator. The generator creates the fake digital content, and the discriminator tries to determine whether that content is real or artificial; the generator uses this feedback to produce increasingly convincing fakes.
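For illustration only, the sketch below shows a minimal generator/discriminator pair and a single adversarial training step in PyTorch. The layer sizes, noise dimension, and learning rates are arbitrary assumptions and are not taken from any particular deepfake tool.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained against each other. Shapes and hyperparameters are arbitrary.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 64, 784  # e.g. flattened 28x28 images (assumption)

generator = nn.Sequential(          # maps random noise -> fake sample
    nn.Linear(NOISE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # maps sample -> probability "real"
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks are trained in alternation, each forces the other to improve, which is why the resulting fakes can become difficult to distinguish from real content.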
But despite some states taking steps forward, there is no federal law tackling deepfake porn. This means the ability to bring criminal or civil charges against an individual differs between states, and conduct that is illegal in one state may not be illegal in another.
Deepfakes can be harmful, but creating one that is hard to detect is not easy. Producing a deepfake today requires a graphics processing unit (GPU); for a persuasive result, a gaming-grade GPU costing a few thousand dollars can be sufficient.
As reported by local media Jiemian, Tencent Cloud's service can analyze and train itself on three-minute videos and 100 voice clips to produce a convincing deepfake video within 24 hours. The deepfake creation service costs roughly 1,000 yuan or $145.
Several methods can be used to detect audio deepfakes with biometrics: spectral analysis, which examines the audio signal to detect voice patterns, and deep-learning algorithms that analyse an individual's voice and recognise unique characteristics that are difficult to replicate in deepfakes.
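As a rough illustration of the spectral-analysis step (not of any particular detection product), the sketch below computes a spectrogram of an audio clip with SciPy. The file name is a hypothetical placeholder, and a real detector would feed features like these into a trained classifier rather than just printing summary statistics.

```python
# Illustrative spectral analysis of an audio clip (not a complete detector).
# Assumes a PCM WAV file named "voice_sample.wav" (hypothetical path).
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, audio = wavfile.read("voice_sample.wav")
if audio.ndim > 1:                     # mix stereo down to mono
    audio = audio.mean(axis=1)
audio = audio.astype(np.float32)

# Short-time spectrogram: frequency content over time, the raw material
# that spectral or deep-learning detectors analyse for voice patterns.
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=1024)
log_power = 10 * np.log10(power + 1e-10)   # dB scale for readability

print(f"{len(freqs)} frequency bins x {len(times)} time frames")
print("mean spectral energy (dB):", log_power.mean())
```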
Australia has no specific legislation which addresses the potential misuse of deepfake technology, and we have yet to see a case concerning a deepfake reach the Australian judicial system. Other jurisdictions, however, have begun the process of legislating to address the potential for deepfakes to be misused.
TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm. Its updated guidelines say deepfakes of private figures and young people are also not allowed.
Cybersecurity experts say deepfake technology has advanced to the point where it can be used in real time, enabling fraudsters to replicate someone's voice, image and movements in a call or virtual meeting. The technology is also widely available and relatively easy to use, they say.
Often, they inflict psychological harm on the victim, reduce employability, and affect relationships. Bad actors have also used this technique to threaten and intimidate journalists, politicians, and other semi-public figures. Furthermore, cyber criminals use deepfake technology to conduct online fraud.
Tech companies like Intel are leading the pack. In November 2022, Intel released FakeCatcher, a cloud-based tool that the company claims can accurately detect fake videos 96% of the time. FakeCatcher uses AI to analyze the blood flow of humans in videos, drawing on up to 72 different detection streams.
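FakeCatcher itself is proprietary, but the general idea it builds on, remote photoplethysmography (estimating a pulse-like signal from subtle colour changes in facial skin), can be sketched roughly as below. The video path, the crude face region, and the filter settings are assumptions for illustration; this is not Intel's method.

```python
# Rough illustration of a photoplethysmography-style signal from video
# (the general principle behind "blood flow" analysis; NOT FakeCatcher).
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

VIDEO_PATH = "face_clip.mp4"   # hypothetical input video
FPS = 30.0                     # assumed frame rate

cap = cv2.VideoCapture(VIDEO_PATH)
green_means = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 4 : h // 2, w // 3 : 2 * w // 3]  # crude stand-in for a detected face region
    green_means.append(roi[:, :, 1].mean())            # green channel tracks pulse best
cap.release()

signal = np.asarray(green_means) - np.mean(green_means)

# Band-pass around plausible heart rates (0.7-4 Hz, i.e. roughly 42-240 bpm).
b, a = butter(3, [0.7 / (FPS / 2), 4.0 / (FPS / 2)], btype="band")
pulse = filtfilt(b, a, signal)

print("frames analysed:", len(pulse))
print("pulse-band signal variance:", pulse.var())
```

A detector built on this idea would compare the consistency of such pulse-band signals across facial regions, since generated faces tend not to reproduce them coherently.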
If you don't agree to your image being used or manipulated, then it is wrong for someone to do so. It's a line that can be (and has been) easily turned into law: deepfake someone without their consent and you risk a criminal charge. Making it illegal would certainly limit, if not stop, the practice.
It may not be ethical and downloaders might be sued for copyright infringement, but there are no laws that criminalise Australians downloading and watching content for their own individual use.
This makes deepfakes a serious threat. While deepfakes can sometimes be used for fun (for example, people use deepfake apps to create memes), the same technology and apps can be used by cybercriminals to do serious harm.
R 18+ films are legally restricted: they are not suitable for people aged under 18, and anyone screening R 18+ content must not allow those aged under 18 to view it.
Dozens of illegal streaming sites, including 123movies, are blocked in Australia following a major court decision to limit piracy. The Federal Court has ordered internet service providers to ban 63 illegal streaming websites in Australia.
High-quality deepfakes are not easy to discern, but with practice, people can build intuition for identifying what is fake and what is real. You can practice trying to detect deepfakes at Detect Fakes.
While there are various ways one can spot deepfake images, deepfake videos tend to share two main features: unnatural eye movements, and audio that is very often out of sync with the person's mouth movements.
Deepfakes started with the Video Rewrite program, created in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney. The program altered existing video footage to create new content of someone mouthing words they didn't speak in the original version.
Intel's deepfake detector analyzes 'blood flow' in video pixels and returns results in milliseconds with 96% accuracy.
Deepfakes Web offers a free and a paid version. The free version takes around 5 hours to generate a video, while the premium version, which costs $3 per hour, churns out a video in just 1 hour. This tool uses powerful GPUs in the cloud but still takes a lot of time to render all the data perfectly.