The advent of the Internet has changed the way people access and share information, making it easier for malicious individuals to spread biased, unreliable, or false news. Recent technological developments, including artificial intelligence models that can generate realistic texts, audio recordings and images, are contributing to this wave of misinformation.
In recent years, distinguishing between real and fake news has become increasingly difficult, creating the perfect breeding ground for ignorance, confusion and polarization. Therefore, developing effective tools to quickly identify and remove online fake news from popular websites and search engines is crucial.
Researchers at National Yang Ming Chiao Tung University, Chung Hua University and National Ilan University recently developed a new multimodal model that can help quickly detect fake news online. The model, presented in a paper published in Science Progress, identifies fake news by jointly processing textual and visual data, rather than a single type of data.
“The existing literature primarily focuses on the analysis of single features in fake news, and neglects the recognition of multi-modal feature fusion,” Szu-Yin Lin, Wen-Qiu Chen and their colleagues wrote in their paper.
“Compared to single-modal approaches, multimodal fusion allows for more comprehensive and enriched information to be captured from different data modalities (such as text and images), thus improving the performance and effectiveness of the model. This study proposes a model that uses multimodal fusion to identify fake news, with the aim of reducing misinformation.”
To improve fake news detection, Lin, Chen and their colleagues set out to develop an alternative model that would simultaneously analyze the textual and visual features of online news. Their model first cleans the data and then extracts these features from the cleaned data.
The researchers’ model combines textual and visual information using different fusion strategies, including early fusion, joint fusion, and late fusion techniques. In initial tests, this multimodal approach performed remarkably well, detecting fake news more reliably than well-established single-modality techniques, including BERT.
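The difference between the fusion strategies mentioned above can be illustrated with a minimal sketch. This is not the authors' actual model: the feature vectors, dimensions, and weighting below are hypothetical stand-ins for the outputs a text encoder (such as BERT) and an image encoder would produce in practice.

```python
# Hypothetical sketch of early vs. late fusion for fake news detection.
# All names and dimensions are illustrative, not from the published model.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for extracted features: in a real pipeline these would come
# from a text encoder and an image encoder after data cleaning.
text_feat = rng.normal(size=128)   # hypothetical text embedding
image_feat = rng.normal(size=128)  # hypothetical image embedding

def early_fusion(t, v):
    """Concatenate modality features into one joint vector
    that a single downstream classifier would consume."""
    return np.concatenate([t, v])

def late_fusion(p_text, p_image, w=0.5):
    """Combine per-modality 'fake' probabilities produced by
    two separate classifiers into one weighted score."""
    return w * p_text + (1 - w) * p_image

fused = early_fusion(text_feat, image_feat)
print(fused.shape)            # (256,) — one joint feature vector
print(late_fusion(0.8, 0.6))  # 0.7 — averaged decision score
```

Joint fusion sits between the two: the modalities are encoded separately but merged inside the model, so the fusion layer is trained end to end with both encoders.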
The team’s multimodal model was tested on the GossipCop and Fakeddit datasets, both of which are often used to train models to detect fake news. On these two datasets, single-modality models had previously been found to detect fake news with unsatisfactory accuracies of 72% and 65%, respectively.
“The proposed framework processes textual and visual information through data cleaning and feature extraction before classification,” Lin, Chen and their colleagues wrote. “Fake news is classified by a model that achieves 85% and 90% accuracy on the Gossipcop and Fakeddit datasets, with F1 scores of 90% and 88%, demonstrating its performance.
“The study presents results across different training periods, demonstrating the effectiveness of multimodal fusion in combining text and image recognition to combat fake news.”
The promising results collected by Lin, Chen, and colleagues highlight the potential of multimodal fusion models for fake news detection, and could encourage other teams to develop similar multimodal models.
In the future, the new model could be tested on further datasets and on real-world data. Ultimately, it could contribute to global efforts to address and reduce online misinformation.
More information:
Szu-Yin Lin et al., A multimodal fusion model of text and image to enhance fake news detection, Science Progress (2024). DOI: 10.1177/00368504241292685
© 2024 Web of Science
Citation: Alternative model can identify fake news by processing textual and visual data (2024, November 4) retrieved November 4, 2024 from
This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.