Using CNN to Detect Fake News

Note: Recording: https://bit.ly/2PTPr24
Password: 4.M@=4rn
Speaker: Julian Ingham, Boston University

When: August 7, 2020 (Fri), 12:30PM to 01:30PM
Hosted by: Eric Boyers

This event is part of the Graduate Student Council Events.

Part of the student seminar series. A recording will be posted on the event page.

Abstract: Recent advances in machine learning techniques have made possible the creation of high-quality falsified videos. One particular type of fake video, commonly known as a DeepFake, has recently drawn particular attention. In a DeepFake, the face of a target individual is replaced by the face of a donor individual synthesized by a neural network, retaining the target's facial expressions and head poses. Well-crafted DeepFakes can create highly convincing illusions of a person's presence and activities, with potentially serious political, social, and legal consequences. This talk will report on some investigations into the task of discriminating between real video data and DeepFakes. A large volume of video data was processed into a set of still color images. A type of neural network known as a Convolutional Neural Network (CNN) was employed to detect artifacts of the forgery in the facial region, such as distortions and color blotches. Experimenting with different network architectures, we found that our best models were highly accurate on the test set, placing among the top 100 entries in the Kaggle Deepfake Detection Challenge (out of 2,265 entrants). Further, we found that certain manipulations of the input data (rotations and dilations) decrease predictive power, suggesting the presence of asymmetric correlations in DeepFake artifacts. We also summarize preliminary experiments with transfer learning.
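
The abstract describes a pipeline of extracting still color images from video and training a CNN to flag forgery artifacts. The speaker's code is not available here; the sketch below is only a minimal illustration of what such a pipeline could look like, assuming OpenCV for frame extraction and PyTorch for the classifier. The file name, frame stride, image size, and network architecture are illustrative assumptions, not the setup used in the talk.

    # Minimal sketch (not the speaker's code): extract still frames from a video
    # with OpenCV, then classify each frame as real or fake with a small CNN.
    # The file path, frame stride, and network shape are illustrative assumptions.
    import cv2
    import torch
    import torch.nn as nn

    def extract_frames(video_path, stride=30, size=(128, 128)):
        """Grab every `stride`-th frame and resize it to a fixed-size color image."""
        cap = cv2.VideoCapture(video_path)
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % stride == 0:
                frames.append(cv2.resize(frame, size))
            idx += 1
        cap.release()
        return frames

    class FrameCNN(nn.Module):
        """Small convolutional classifier: real (0) vs. DeepFake (1)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Usage: turn extracted frames into a tensor batch and score them.
    frames = extract_frames("example_clip.mp4")  # hypothetical file name
    if frames:
        batch = torch.stack([
            torch.from_numpy(f).permute(2, 0, 1).float() / 255.0 for f in frames
        ])
        logits = FrameCNN()(batch)
        fake_prob = logits.softmax(dim=1)[:, 1]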
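
The abstract also notes that rotations and dilations of the input reduce predictive power. The following is a hedged sketch of how one might probe that sensitivity at test time, assuming torchvision's functional transforms and the hypothetical FrameCNN and batch names from the previous sketch; the angles and scales are arbitrary choices, not values from the talk.

    # Minimal sketch (an assumption, not the speaker's experiment): measure how
    # rotating or dilating (rescaling) input frames changes the model's mean
    # fake-probability, to see whether predictions degrade under these manipulations.
    import torch
    import torchvision.transforms.functional as TF

    def sensitivity(model, batch, angles=(0, 15, 30, 45), scales=(1.0, 1.1, 1.25)):
        """Return mean fake-probability for each rotation angle and dilation scale."""
        model.eval()
        results = {}
        with torch.no_grad():
            for angle in angles:
                rotated = TF.rotate(batch, angle)
                results[("rotate", angle)] = model(rotated).softmax(1)[:, 1].mean().item()
            for scale in scales:
                # An affine transform with scale > 1 acts as a dilation about the image center.
                dilated = TF.affine(batch, angle=0, translate=[0, 0], scale=scale, shear=[0.0])
                results[("dilate", scale)] = model(dilated).softmax(1)[:, 1].mean().item()
        return results

    # Usage with the FrameCNN and frame batch from the previous sketch:
    # scores = sensitivity(FrameCNN(), batch)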