3 October 2025

Research pick: I heard a rumour - "A deep learning model with effective tokenisation and feature extraction for detection of rumours in online social networks"

Rumours circulate rapidly on social media and can have serious detrimental effects on the individuals, groups, or companies named. Those spreading and sharing the rumours rarely seem to worry about their veracity. This tangible societal challenge has implications spanning public opinion, social stability, and financial markets.

Research in the International Journal of Internet Manufacturing and Services has looked in detail at the effects of uncertain truths, deceived wisdom, conspiracy theories, and fake news, and finds that conventional moderation strategies are alarmingly inadequate. This inadequacy can lead to stock market crashes, company failures, and the ruination of individuals. It can even kill when medical fake news goes viral.

Young users, particularly those under 26, who are often the most active on social media and the most easily influenced, are especially prone to sharing unverified claims, thereby amplifying the societal impact of such unchallenged disinformation and misinformation.

The current research offers a computational model designed to automatically detect rumours on social media, with a particular focus on well-known microblogging platforms. Previous efforts to automate this process often relied on conventional machine learning techniques or convolutional neural networks (CNNs), with all their limitations, so the new approach instead builds on long short-term memory (LSTM) networks. LSTMs are a type of deep learning algorithm specifically designed to capture temporal relationships in sequential data. In practical terms, they can analyse how information evolves across posts over time, rather than treating each post as an isolated unit.
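To give a flavour of what such a pipeline involves, here is a minimal sketch of an LSTM-based rumour classifier in Python using Keras. It is an illustrative approximation only, not the authors' actual model: the toy posts, vocabulary size, sequence length, and layer sizes are all assumptions made for the example.

```python
# Minimal sketch of an LSTM text classifier for rumour detection.
# NOT the published model: data, tokenisation settings, and layer
# sizes here are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# Toy corpus: each post is labelled 1 (rumour) or 0 (non-rumour).
posts = tf.constant([
    "BREAKING: company X is going bankrupt, sell everything now!",
    "Official statement: quarterly results released on schedule.",
    "Miracle cure shared by thousands, doctors hate it!",
    "Health agency publishes updated vaccination guidance.",
])
labels = np.array([1, 0, 1, 0])

# Tokenise the posts and pad/truncate each one to a fixed length so
# the LSTM can process variable-length text in a single batch.
max_tokens, max_len = 10_000, 50
vectorizer = layers.TextVectorization(
    max_tokens=max_tokens, output_sequence_length=max_len)
vectorizer.adapt(posts)
x = vectorizer(posts)  # shape: (num_posts, max_len) integer IDs

# Embedding learns dense word vectors; the LSTM reads them in order,
# capturing how wording unfolds across the post; the sigmoid unit
# outputs a rumour probability.
model = Sequential([
    layers.Embedding(input_dim=max_tokens, output_dim=64, mask_zero=True),
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=5, batch_size=2, verbose=0)

# Score a new, unseen post.
new_post = tf.constant(["Unconfirmed report says the CEO has resigned"])
prob = float(model.predict(vectorizer(new_post), verbose=0)[0, 0])
print(f"Rumour probability: {prob:.2f}")
```

In a real deployment the toy corpus would be replaced by a large labelled dataset of posts, and the tokenisation and feature-extraction steps would be tuned far more carefully, which is where much of the paper's contribution lies.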

When tested on a standard dataset from a well-known microblogging platform, the model demonstrated an accuracy of 99.86%. This, the team says, is a significant improvement over earlier methods. The approach could enable near-instantaneous identification of misleading or unverified content, allowing platforms to mitigate the propagation of flagged misinformation.

Mallick, C., Mishra, S., Das, S. and Paikaray, B.K. (2025) ‘A deep learning model with effective tokenisation and feature extraction for detection of rumours in online social networks’, Int. J. Internet Manufacturing and Services, Vol. 11, No. 2, pp.93–113.
