Peer-reviewed, meta-analysis, data
News — Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what’s real and what’s fake is getting harder and harder.
A research collaboration involving the University of Portsmouth has developed an innovative solution that accurately distinguishes fake images from genuine ones, and can also identify the source of an artificial image.
The tool, named ‘DeepGuard’, combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labelled data, making its predictions smarter and more reliable.
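As an illustration only, and not the authors' implementation, the three techniques named above can be sketched with scikit-learn: a binary classifier decides real versus fake, an ensemble combines several base models by voting, and a multi-class classifier attributes a fake image to a hypothetical generator. The feature vectors below are random placeholders standing in for whatever image features the real system extracts.

```python
# Sketch of binary classification, ensemble learning, and multi-class
# classification on placeholder "image feature" data (NOT DeepGuard itself).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features: 200 samples, 16 features each.
X = rng.normal(size=(200, 16))
y_real_fake = rng.integers(0, 2, size=200)   # 0 = real, 1 = fake
y_source = rng.integers(0, 3, size=200)      # hypothetical generator IDs

# 1. Binary classification: real vs fake.
binary_clf = LogisticRegression().fit(X, y_real_fake)

# 2. Ensemble learning: soft-voting over diverse base models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
).fit(X, y_real_fake)

# 3. Multi-class classification: attribute fakes to a source generator.
source_clf = RandomForestClassifier(n_estimators=50, random_state=0)
source_clf.fit(X[y_real_fake == 1], y_source[y_real_fake == 1])

# Pipeline for a new image: flag it, then attribute it if flagged fake.
new_image = rng.normal(size=(1, 16))
if ensemble.predict(new_image)[0] == 1:
    print("predicted source:", source_clf.predict(new_image)[0])
```

On real data the placeholder features would be replaced by learned representations of each image, and each stage would be trained on labelled examples of genuine and generated images.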
The tool could be used to investigate and prosecute criminal activity such as fraud, or by the media to verify that images used in their stories are authentic, helping to prevent misinformation or unintentional bias.
DeepGuard has been developed by a research team led by Dr Gueltoum Bendiab and Yasmine Namani, from a Department of Electronics in Algeria, and involving Dr Shiaeles from the University of Portsmouth’s PAIDS Research Centre.
Dr Shiaeles said: “With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used by adults to incite harm to children. People are also profiteering disingenuously on social media platforms like TikTok, where images of models are being turned into characters and animated in different scenarios in games or for entertainment.
“DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts.”
The study, published in the Journal of Information Security and Applications, will also support further academic research in this area.
During its development, the team reviewed and analysed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined techniques for detecting manipulated images, such as changes in expression, pose, voice, or other facial or bodily features.
ENDS
For more information: Diana Leahy, PR and Press Officer, University of Portsmouth. Email: [email protected]
Notes to editors
Anyone who has access to an image or images that they believe to be connected to illegal activity can report it to the relevant authority or, if it is a child or adult safeguarding issue, contact their local Police Constabulary on the non-emergency 101 telephone number. If a child or adult is in immediate danger of harm, the emergency 999 number should be used.
About the University of Portsmouth
The University of Portsmouth is a progressive and dynamic university with an outstanding reputation for innovative teaching, excellent learning outcomes and globally significant research and innovation.
We’re proud to have a 5-star rating in the QS World University Rankings and to be one of the top 5 young universities in the UK, based on the Times Higher Education Young University Rankings.
Our research impacts lives today and in the future. Researchers work closely with business, industry and the public sector to solve local, national and global challenges across science, technology, humanities, business and creative industries.
Our world-class research is validated by the Research Excellence Framework (REF), in which Portsmouth was ranked third of all modern UK universities for research power in the Times Higher Education REF rankings.