Post-Truth Democracy 2030

2020

A speculative distributed ledger platform that uses a psychologically driven UI to regulate fake news and prevent echo chambers in 2030.

Speculative Design, Algorithmic Design, Behavioural Design, UI/UX

Team
Tomas Knaze
Esther Maltby
Ella Cope
Jordan Kotler
Supervisors
Dr Sam Cooper
Dr Freddie Page
Additional Advisor
Professor Robert Shorten
Through misinformation and targeted advertising, the volatility of political discourse on social media has tested democracy and the integrity of truth within online spaces. These changes have amplified the echo chamber effect, compromising the ability of citizens to make rational and informed decisions when voting (the basic principle of democracy). This project consists of a research report that explores the complex political, psychological, and technological factors that will affect participatory democracy in the UK by 2030 and a speculative design solution that uses a distributed ledger platform to authenticate information and encourage the informed exploration of alternative perspectives. Moreover, the project explores the recent influence and the future trajectory of memes and XR in shaping political discourse.
Design Process
The future of democracy depends on having informed voters, so the electorate needs to be aware of the validity of the information they receive in the media. Through design research, we developed a speculative online news-aggregation platform that uses a decentralised algorithm to assess the validity of digital content.

Like social media, anyone can submit content to be published on the platform. However, each publisher has an associated 'credibility score' that evolves with the authentication results of their previous submissions. This score influences the number of users who consume the publisher's content.
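A minimal sketch of how such a score might behave, assuming a simple moving-average update and a reach that scales linearly with credibility (neither rule is specified in the report):

```python
from dataclasses import dataclass

@dataclass
class Publisher:
    name: str
    credibility: float = 0.5  # assumed neutral starting score for new publishers

    def record_authentication(self, verified: bool, rate: float = 0.1) -> None:
        # Nudge the score towards 1.0 when a submission is authenticated as
        # true, and towards 0.0 when it is debunked.
        target = 1.0 if verified else 0.0
        self.credibility += rate * (target - self.credibility)

    def reach(self, base_audience: int) -> int:
        # The score influences distribution: lower credibility, fewer readers.
        return int(base_audience * self.credibility)

pub = Publisher("example-outlet")
pub.record_authentication(verified=True)   # credibility rises from 0.5 to 0.55
print(pub.reach(base_audience=10_000))     # 5500
```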

Authentication is executed by randomly selecting a parked car and running an open-source algorithm on its internal computer. Because each car acts as an expensive, hard-to-replicate computer, the system is protected from Sybil attacks, while encryption ensures biases do not influence the authentication. The algorithm identifies information that can be proven true or false, as well as opinions that cannot be proven either way. These classifications slowly fade over time to account for new contradictory information that may emerge, signalling that the authentication was most valid at the time of publishing.
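A sketch of these two mechanisms under our own assumptions: verifier selection seeded from the content's hash so the "random" draw is auditable, and an exponential fade with a hypothetical 90-day half-life:

```python
import hashlib
import random
import time

# Hypothetical registry of parked cars available as verification nodes.
PARKED_CARS = ["car-001", "car-017", "car-042", "car-318"]

def select_verifier(content: bytes, cars: list[str]) -> str:
    # Seed the choice from the content hash so the selection can be
    # reproduced and audited by other nodes.
    seed = hashlib.sha256(content).digest()
    return random.Random(seed).choice(cars)

def classification_confidence(verified_at: float, half_life_days: float = 90.0) -> float:
    # Classifications fade over time: confidence halves every half-life,
    # reflecting that the verdict was most valid at the time of publishing.
    age_days = (time.time() - verified_at) / 86_400
    return 0.5 ** (age_days / half_life_days)

print(select_verifier(b"article body", PARKED_CARS))
print(classification_confidence(verified_at=time.time() - 90 * 86_400))  # ~0.5
```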

When users interact with content about a topic that is gaining significant attention, they are presented with an Opinion Map. It combines all content about the associated issue, assesses it using Natural Language Processing, and visually groups it by opinion. The user's autonomy to explore information in this map offers a softer way of breaking echo chambers than binary fact-checking.
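As an illustration, a toy version of the grouping step, using TF-IDF vectors and k-means in place of whatever stance-detection model the platform would actually run (this pipeline is our assumption, not the project's):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [
    "Policy X will strengthen the economy and create jobs.",
    "Policy X is a costly mistake that will raise taxes on families.",
    "Early figures suggest Policy X creates jobs in key sectors.",
    "Economists warn Policy X will raise taxes without clear benefits.",
]

# Embed each piece of content, then group by opinion cluster; each cluster
# becomes one region of the Opinion Map for the user to explore.
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, article in sorted(zip(labels, articles)):
    print(label, article)
```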

Read the research report

See the portfolio