Artificial Intelligence and Trust in Digital Media
Multiplying information sources, automated content aggregation, fake news: while the amount of information keeps growing, its reliability is increasingly in question. On 3 September, the EPFL+ECAL Lab, in collaboration with IDIAP, is publishing a study at the INTERACT conference on how to restore trust among online readers. A major challenge for the future of democracy, and a key success factor for the media!
According to a 2019 Reuters report, search engines, social media and content aggregators are the primary source of news for more than half of the population worldwide. This ever-growing trend increases the risk of spreading fake news and fuelling disinformation campaigns.
Recent research in computer science, together with evolving journalistic practices, makes it possible to provide indicators of the reliability of content and the origin of its sources. Transparency is also becoming essential in the face of possible biases in the artificial intelligence used to aggregate content. But is this enough to strengthen the credibility of information with the public? Beyond the reliability of sources, trust is what matters most: the World News Congress has identified it as the number one success factor for the media. Yet trust is largely subjective and develops through readers’ perception.
Among the first studies on the subject, the work of the EPFL+ECAL Lab, EPFL’s design research centre, has identified trust indicators for digital media, explored ways of designing them visually and assessed their actual impact on potential users. To bring artificial intelligence, now widely used to combine different sources of content, into this work, the study was conducted with IDIAP’s Social Computing Group, with which an experimental platform was set up to test specific methods of combining content.
Tests conducted with more than 200 people not only revealed key factors for increasing trust in the media, but also produced new knowledge about user perception. In particular, the results show that developing algorithms to detect, tag or report fake news is not enough to restore trust when their output is presented in a purely technical way. To give these algorithms impact, their results need to be translated through design, following different strategies: using them to organise information, for example, or giving them specific visual forms.
On 3 September 2021, the project “Trust over Time” is being published at INTERACT, the 18th edition of the international conference on human-computer interaction, in Bari, Italy. It shows that trust in information, essential to any democracy, calls for an innovation approach that brings together technology, digital design, perception science and professional practice. The project was carried out in cooperation with the Swiss broadcasting company RTS and with the support of the Initiative for Media Innovation.
MAS in Design Research for Digital Innovation
Delphine Ribes, Yves Kalberer
Hardware & Firmware Engineering
Dr Cédric Duchêne
Dr Andreas Sonderegger
IMI – Initiative for Media Innovation
University of Fribourg
RTS – Swiss broadcasting company