Amal Naitali, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulatio...
Lanting Li, Tianliang Lu, Xingbang Ma, Mengjiao Yuan and Da Wan
In recent years, voice deepfake technology has developed rapidly, but current detection methods suffer from poor generalization and insufficient feature extraction when facing unknown attacks. This paper presents a forged speech detect...
Deeraj Nagothu, Ronghua Xu, Yu Chen, Erik Blasch and Alexander Aved
With the fast development of Fifth-/Sixth-Generation (5G/6G) communications and the Internet of Video Things (IoVT), a broad range of mega-scale data applications emerge (e.g., all-weather all-time video). These network-based applications highly depend o...
Haoxuan Qiu, Yanhui Du and Tianliang Lu
To protect images from deepfake tampering, adversarial examples can be crafted to replace the original images, distorting the output of the deepfake model and disrupting its operation. Current studies lack generalizability in that they simply focus on t...
Zaynab Almutairi and Hebah Elgibreen
A number of AI-generated tools are used today to clone human voices, giving rise to a new technology known as Audio Deepfakes (ADs). Although introduced for beneficial uses such as audiobooks, ADs have also been exploited to threaten public safety. ADs have thus rec...
Li Fan, Wei Li and Xiaohui Cui
Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the i...
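The abstract above refers to adversarial example attacks on forensic detectors. As a minimal illustrative sketch only (not the paper's method), the classic one-step FGSM idea can be shown against a hypothetical linear-logistic "fake" scorer in NumPy, where the attacker nudges the input against the gradient sign to lower the detector's score:

```python
import numpy as np

# Hypothetical toy detector: sigmoid(x @ w + b) is the "fake" probability.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, eps):
    # Gradient of the logistic score with respect to the input x;
    # stepping against its sign reduces the detector's output.
    s = sigmoid(x @ w + b)
    grad = s * (1.0 - s) * w
    return x - eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # assumed detector weights (illustrative)
b = 0.0
x = w.copy()                    # an input the toy detector scores as fake
score_before = sigmoid(x @ w + b)
x_adv = fgsm_perturb(x, w, b, eps=3.0)
score_after = sigmoid(x_adv @ w + b)
```

Here `score_after` is strictly lower than `score_before`: the perturbation moves each coordinate against the weight's sign, which is the gradient-sign direction for a logistic scorer. Real attacks on deep detectors use backpropagated gradients and bounded perturbation budgets, but the mechanism is the same.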