
‘Trust’ and ‘distrust’ buttons could reduce social media misinformation: Study


In a recent experimental study conducted by researchers from UCL, the inclusion of ‘trust’ and ‘distrust’ buttons, alongside the traditional ‘like’ button, on social media platforms has been proposed as a potential strategy to mitigate the dissemination of misinformation.

According to the findings published in eLife, the introduction of incentives to promote accuracy resulted in a 50 per cent reduction in the reach of false posts.

Professor Tali Sharot, co-lead author of the study and affiliated with UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology said, “Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarisation of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity. Existing ways to combat this, such as flagging inaccurate posts, have had limited impact.

“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true.

“Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”


In a recent publication in Cognition, Professor Sharot and her colleagues investigated the influence of repetition on the sharing of information on social media.

They discovered that individuals were more inclined to share statements that they had encountered before, perceiving repeated information as more likely to be accurate.

This finding underscores how repetition amplifies the impact of misinformation.

To explore potential solutions, the researchers conducted a series of six experiments involving 951 participants, utilising a simulated social media platform.

The platform allowed users to share news articles, half of which contained inaccuracies. Participants had the option to react to these articles not only with traditional ‘like’ or ‘dislike’ responses and reposting, but also with novel ‘trust’ or ‘distrust’ reactions in certain experimental conditions.

The aim was to assess the effectiveness of these additional reaction options in combating the spread of misinformation.
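The mechanism the experiments test can be illustrated with a toy simulation. The sketch below is purely hypothetical — it is not the study’s actual platform or computational model — and assumes a simple rule: without an accuracy incentive, resharing depends only on how engaging a post is; with trust/distrust buttons, users also discount posts they perceive as untrustworthy (the `0.9`/`0.3` veracity weights are illustrative numbers, not figures from the paper).

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def simulate(n_posts=1000, accuracy_incentive=False):
    """Return the fraction of reshared posts that are false.

    Half the posts on the toy feed are inaccurate, mirroring the
    study's simulated platform. The veracity weighting is a
    hypothetical stand-in for users seeking 'trust' reactions.
    """
    reshared_false = 0
    reshared_total = 0
    for _ in range(n_posts):
        is_true = random.random() < 0.5   # half the articles are false
        engagement = random.random()      # how 'likeable' the post is
        if accuracy_incentive:
            # users discount posts they would expect to earn 'distrust'
            veracity_weight = 0.9 if is_true else 0.3
            p_share = engagement * veracity_weight
        else:
            p_share = engagement          # likes alone drive sharing
        if random.random() < p_share:
            reshared_total += 1
            if not is_true:
                reshared_false += 1
    return reshared_false / max(reshared_total, 1)

baseline = simulate(accuracy_incentive=False)
with_buttons = simulate(accuracy_incentive=True)
# under these assumed weights, the false share of reshares drops
# once sharing is weighted by perceived trustworthiness
```

Even this crude model reproduces the qualitative pattern the study reports: once accuracy feeds back into the reward for sharing, the proportion of false posts being recirculated falls.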

Notably, the researchers found that the trust/distrust buttons proved popular: participants used them more frequently than the traditional like/dislike buttons.

Moreover, the incentive structure associated with the trust/distrust buttons was effective, as users began sharing a higher proportion of true information in an attempt to elicit ‘trust’ reactions from others.

Further analysis utilising computational modelling techniques shed light on an important outcome: following the introduction of trust/distrust reactions, participants demonstrated increased attentiveness towards the perceived reliability of news stories before deciding whether to repost them.

In addition to these findings, the researchers also discovered that participants who had engaged with the versions of the platform featuring trust/distrust buttons ended up with more accurate beliefs compared to those who did not have access to these reaction options.

Co-lead author, PhD student Laura Globig (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) said, “Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement.

“While it’s difficult to predict how this would play out in the real world with a wider range of influences, given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation.”

(ANI)
