23 March 2021

Research pick: Spotting and stopping online abuse - "AI to prevent cyber-violence: harmful behaviour detection in social media"

Social media has brought huge benefits to many people around the world with the resources to access its apps and websites. Indeed, billions of people use the popular platforms every month in almost every country of the world, if not all of them. Researchers writing in the International Journal of High Performance Systems Architecture point out that, as with much in life, there are downsides that counter the positives of social media. One such negative facet might be referred to as “cyber violence”.

Randa Zarnoufi of the FSR, Mohammed V University in Rabat, Morocco, and colleagues suggest that the number of victims of this new form of hostility is growing day by day and is having a strongly detrimental effect on the psychological wellbeing of too many people. One perspective that has received little attention in efforts to reduce cyber violence is the psychological state and emotional dimension of the perpetrators themselves. A better understanding of what drives people to commit heinous acts against others online could improve our response and open up new ways to address the problem at its source, rather than simply filtering, censoring, or protecting victims directly.

The team has analysed social media updates using ensemble machine learning and the Plutchik wheel of basic emotions to extract the emotional character of those updates in the context of cyber violence, bullying, and trolling. The analysis draws the perhaps obvious, but nevertheless highly meaningful, conclusion that there is a significant association between an individual’s emotional state and their propensity for harmful intent on social media. Importantly, the work shows how this emotional state can be detected, so that a perpetrator of cyber violence might be approached with a view to improving their emotional state and reducing the negative impact their emotions would otherwise have on the people with whom they engage online.
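The paper’s exact pipeline is not spelled out here, but as a rough illustration of the general idea, the minimal Python sketch below pairs Plutchik-style emotion features (built from a toy lexicon, purely an assumption for this example) with a scikit-learn soft-voting ensemble to flag posts that may carry harmful intent. The lexicon entries, feature design, and choice of base learners are all placeholders rather than the authors’ actual method.

```python
# Illustrative sketch only: emotion features plus an ensemble classifier
# for flagging potentially harmful posts. Lexicon, features, and models
# are assumptions, not the pipeline described in the paper.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Plutchik's eight basic emotions.
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

# Hypothetical toy lexicon mapping words to basic emotions.
LEXICON = {
    "hate": "anger", "stupid": "disgust", "scared": "fear",
    "love": "joy", "hope": "anticipation", "friend": "trust",
}

def emotion_features(post: str) -> list[float]:
    """Count lexicon hits per basic emotion, normalised by post length."""
    words = post.lower().split()
    counts = {e: 0 for e in EMOTIONS}
    for w in words:
        if w in LEXICON:
            counts[LEXICON[w]] += 1
    n = max(len(words), 1)
    return [counts[e] / n for e in EMOTIONS]

# Toy training data: 1 = harmful intent, 0 = benign.
posts = [
    "i hate you and your stupid ideas",
    "you are so stupid it makes me sick",
    "love this, hope to see more from my friend",
    "great post, i trust your judgement",
]
labels = [1, 1, 0, 0]
X = [emotion_features(p) for p in posts]

# Soft-voting ensemble over three base learners.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X, labels)

print(ensemble.predict([emotion_features("i hate this stupid thread")]))
```

In practice a real system would use a validated emotion lexicon or learned emotion classifier and far more data; the point of the sketch is simply that emotional signals can serve as input features to an ensemble that estimates harmful intent.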

This is very much the first step in this approach to addressing the serious and growing problem of cyber violence. The team adds that it will train its system to detect specific issues in social media updates associated with harassment related to sexuality, appearance, intellectual capacity, and political persuasion.

Zarnoufi, R., Boutbi, M. and Abik, M. (2020) ‘AI to prevent cyber-violence: harmful behaviour detection in social media’, Int. J. High Performance Systems Architecture, Vol. 9, No. 4, pp.182–191
