28 July 2025

Research pick: Detecting deepfakes - "Comparative study of CNN models for detecting altered and manipulated images"

As artificial intelligence (AI) continues to transform how images and video are created, concerns about the manipulation of digital content have grown. Research in the International Journal of Forensic Engineering discusses how AI can be used to fight back and find the fakes.

Deepfakes, videos or images manipulated using AI to depict events that never occurred or words that were never uttered, pose a growing threat to public discourse, politics, criminal investigations, and scientific integrity. These convincing forgeries are typically generated by a form of AI known as a generative adversarial network (GAN). In a GAN, two neural networks compete: one creates realistic fake images, while the other attempts to identify the deception. As the two networks evolve together, the system becomes adept at producing forgeries that even experts find difficult to distinguish from authentic content.
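For readers who want to see the adversarial idea concretely, here is a minimal PyTorch sketch of a single GAN training step. The tiny fully connected networks and the random stand-in data are illustrative assumptions, not the models or data discussed in the paper.

# Minimal sketch of the GAN idea: a generator learns to fool a
# discriminator, while the discriminator learns to spot the fakes.
# Illustrative only; not the networks used in the paper.
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator

generator = nn.Sequential(            # noise -> flattened 28x28 "image"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(        # image -> probability it is real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(64, 28 * 28) * 2 - 1   # stand-in for a real batch
real, fake = torch.ones(64, 1), torch.zeros(64, 1)

# Discriminator step: learn to label real images 1 and fakes 0.
fakes = generator(torch.randn(64, latent_dim))
d_loss = (loss(discriminator(real_images), real)
          + loss(discriminator(fakes.detach()), fake))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: produce fakes the discriminator labels as real.
g_loss = loss(discriminator(fakes), real)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

Repeating these two steps is what drives the arms race described above: each network's improvement becomes the other's harder training signal.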

Such technologies have not only blurred the line between the real and the artificial but also raised critical questions for journalism, law enforcement, and scientific publishing, where the authenticity of visual evidence can have serious consequences. As media manipulation becomes more sophisticated, traditional detection methods, such as spotting anomalies in shadows or unnatural features, have proven inadequate.

The work in IJFE has repurposed a relatively simple machine-learning model to detect deepfakes with high accuracy. The researchers tested three convolutional neural network (CNN) architectures (ResNet-50, AlexNet, and LeNet-5) to determine which was most effective in distinguishing genuine images from AI-generated fakes.
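As a rough illustration of how such a comparison might be set up, the snippet below adapts the two larger architectures from torchvision for a two-class real-versus-fake decision; the paper's actual training procedure and data pipeline are not reproduced here.

# A sketch (not the paper's exact setup) of adapting two of the tested
# architectures to output a binary real-vs-fake decision.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet50(weights=None)              # untrained here
resnet.fc = nn.Linear(resnet.fc.in_features, 2)     # two-class head

alexnet = models.alexnet(weights=None)
alexnet.classifier[6] = nn.Linear(4096, 2)          # two-class head

x = torch.randn(1, 3, 224, 224)                     # one dummy RGB image
print(resnet(x).shape, alexnet(x).shape)            # both: torch.Size([1, 2])

LeNet-5 predates the torchvision model zoo and is small enough to define from scratch, as sketched after the next paragraph.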

CNNs are particularly well suited to analysing visual data. They work by identifying patterns, textures, and other features that may not be visible to the human eye. Although newer models such as ResNet-50 are often assumed to be more powerful because of their greater depth and sophistication, the researchers found that LeNet-5, a comparatively simple model developed in the 1990s, initially performed best.
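The simplicity that served LeNet-5 well is easy to appreciate in code. Below is a sketch of the classic 1998 architecture with its final layer sized for a binary real-versus-fake output; the researchers' modified variant differs in details the article does not give.

# Sketch of the classic LeNet-5 architecture (LeCun et al., 1998),
# adapted here for a two-class real-vs-fake decision.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # 32x32 -> 28x28
            nn.AvgPool2d(2),                             # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),  # -> 10x10
            nn.AvgPool2d(2),                             # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),                  # real vs fake logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LeNet5()
logits = model(torch.randn(1, 1, 32, 32))   # one 32x32 greyscale image
print(logits.shape)                         # torch.Size([1, 2])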

The team was able to enhance this CNN through architectural fine-tuning and parameter optimisation, so that their modified LeNet-5 model reached almost 96 percent accuracy. It also achieved an area under the receiver operating characteristic curve (AUC) of almost 99 percent, meaning it separated genuine images from fakes reliably across almost all decision thresholds. Notably, the system relies on neither prior knowledge of an image's content nor its embedded metadata.
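To unpack what those two figures measure, the short example below computes both metrics on a handful of made-up predictions using scikit-learn: accuracy scores hard decisions at a single threshold, while AUC rewards ranking fakes above genuine images across every threshold. The numbers are illustrative, not the paper's results.

# Accuracy vs AUC on toy predictions (illustrative values only).
from sklearn.metrics import accuracy_score, roc_auc_score

y_true   = [0, 0, 0, 0, 1, 1, 1, 1]                    # 1 = fake, 0 = genuine
y_scores = [0.1, 0.2, 0.4, 0.3, 0.8, 0.9, 0.35, 0.6]   # model confidence
y_pred   = [int(s >= 0.5) for s in y_scores]           # threshold at 0.5

print(accuracy_score(y_true, y_pred))    # 0.875: one fake slips past 0.5
print(roc_auc_score(y_true, y_scores))   # 0.9375: 15 of 16 real/fake pairs ranked correctly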

Of course, as researchers develop tools to detect the fakes, those who build the tools that create them will respond with ever greater sophistication to outwit the detectors.

Pawade, V.S. and Mathur, S. (2025) ‘Comparative study of CNN models for detecting altered and manipulated images’, Int. J. Forensic Engineering, Vol. 5, No. 3, pp.216–227.
