Post-Truth Democracy 2030 — Speculating on the unique harms to democratic processes from technological developments

Speculative Design, Design Research

Post-Truth Democracy explores the complex political, psychological, and technological factors that will affect participatory democracy in the UK by 2030, and proposes a distributed ledger platform to authenticate information and encourage the informed exploration of plural perspectives.

Misinformation and targeted advertising have made political discourse on social media increasingly volatile, testing democracy and the integrity of truth in online spaces. These changes have amplified the echo chamber effect, compromising the ability of citizens to make rational and informed decisions when voting, the basic principle of democracy. The future of democracy depends on having informed voters; the electorate therefore needs to be aware of the validity of the information it receives in the media.
Through speculative design research, we identified key technological trends and current cultural phenomena that are likely to be misused by bad actors and could therefore threaten the integrity of democracy over the next 10 years. From this forecasting research, we developed design fictions that explored alternate future pathways and converged on ideas that could address the most likely threats.
The solution developed is a speculative online aggregate news platform that uses a decentralised algorithm to assess the validity of digital content. As on social media, anyone can submit content to be published on the platform. However, each publisher has an associated 'credibility score' that evolves with the authentication results of their previous submissions. This score influences the number of users who consume the publisher's content.
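As a minimal sketch of how such a score might behave, the update rule below nudges a publisher's credibility towards 1 on each successful authentication and towards 0 on each failure. The class, parameter values, and quadratic reach curve are illustrative assumptions, not the project's specification.

```python
# Illustrative sketch of an evolving publisher credibility score.
# The update rule, weights, and reach curve are assumptions.

class Publisher:
    def __init__(self, name: str, credibility: float = 0.5):
        self.name = name
        self.credibility = credibility  # bounded in [0, 1]

    def record_authentication(self, passed: bool, weight: float = 0.1) -> None:
        """Move the score towards 1 on a pass and towards 0 on a fail."""
        target = 1.0 if passed else 0.0
        self.credibility += weight * (target - self.credibility)

    def reach_multiplier(self) -> float:
        """Scale the share of users shown this publisher's content.

        A superlinear curve means low-credibility publishers lose
        reach much faster than they lose score.
        """
        return self.credibility ** 2
```

Under these assumed values, a publisher starting at 0.5 who fails three consecutive authentications drops to roughly 0.36, cutting their reach multiplier from 0.25 to about 0.13.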
Year
2020
Supervisors
Prof Robert Shorten
Dr Sam Cooper
Dr Freddie Page
Contributions
Speculative Design
Forecasting Research

Future technology is political

The fabrication of information is not a new concept. Historically, leaders have used propaganda to slander opponents and influence public perception, with documented examples dating back to the Roman Empire. However, only in its contemporary form as fake news has it been given channels through which to disseminate rapidly.

The purpose of democratic systems is to allow citizens to exercise their power to direct the course of government. However, as technology and society advance, new challenges will emerge, threatening the effectiveness of democracy and raising the question: are current systems still fit for purpose?
To ascertain the key problems to address, broader contextual research was conducted on the current and preferable states of technology and society to understand their impact. Past and present case studies of democracies, and of the issues affecting them, were investigated to identify trends likely to influence the future development of democratic systems. A scope wheel was used to conduct foresight analysis, providing a graphical representation of probable and possible future realities; this informed ideation by predicting the timeline over which significant technological, social, economic, and environmental factors would take hold. To explore the scope of the projected future, narratives were constructed around three personas of different lifestyles, cultures, and socio-economic backgrounds. Their routines encapsulate how developing technology, changing political climates, and the rapidly growing digital world could affect everyday lives within future democratic systems. The following extracts from the research report outline these narratives.

Forecasting potential threats

Our research showed that the increasing prevalence of five technological trends (5G, IoT, Mixed Reality, Blockchain, and Emotion Recognition), three cultural trends (increasing social media usage, targeted advertising, and virality), and the interplay between them all will present novel and challenging threats to future democracies.
We identified the growing ramifications of the intrinsic link between people’s personal and digital selves. Technologies such as emotion recognition, mixed reality, and the Internet of Things (made more accessible by the spread of 5G) will likely create a digital footprint of each individual’s actions and large amounts of data for which they are accountable. New data sources will be exploited to collect user data, with ad personalisation further compounded by metrics such as eye movement, facial expressions, sleep patterns, or smart fridge contents. People may be forced to pay a fee to shield their data from third-party use, exacerbating inequality as targeted advertising further manipulates users from lower-income backgrounds who rely on free, advert-saturated services.
We also observed signals that economic and social divides within society may be heightened. As resources become scarcer due to climate change and are no longer readily available to everybody, people may act irrationally and become self-interested instead of upholding collective values. Blocking features may soon extend to automatically separating users from those with opposing views to minimise conflict, shrinking the range of information sources users can access and deepening information segregation. Anonymity may also come at a premium, further segregating groups from contrasting economic backgrounds and likely increasing the influence of those who hold profiling data over those who cannot afford privacy.

Deepfakes

The vast quantities of data available, coupled with emerging mixed reality technologies, will likely increase the prevalence of deepfakes intended specifically for political misuse. Once a deepfake has spread and been watched by masses of people, misperceptions take hold; even if the video is subsequently debunked, entrenched opinions are very difficult to dislodge. Videos of political leaders could incite riots and destabilise financial markets long before they are exposed as fake, with suspicion alone able to cause self-replicating volatility.

Memes as revolutionary political tools

The phenomenon of virality has become more prominent in the digital age as more people can quickly access information through online platforms. Viral content tends to evoke a mix of strong emotions in the reader, such as surprise, fear, or anger. Beyond the core information, the techniques used to evoke these emotions include emotive language, bold colours and fonts, and graphic or shocking imagery, all of which help attract views and, thus, engagement.
Looking further into the relationship between visual culture and viral content, the visual identity of content carries two things: something it denotes (literally means) and something it connotes (means subconsciously). The language of visual culture has evolved extremely quickly and has undergone rapid change in the digital age. Memes, as described by Dawkins in his book ‘The Selfish Gene’, are ‘small cultural units of transmission which are spread by copying or imitation’. By taking parts of images out of context, placing them in new visual environments, and subverting them to new meanings, memes become highly emotive image hybrids. Their humorous and satirical tone instantly defuses the intensity of polarised political arguments (an inherently viral property) and lends them widespread appeal. Memes can easily become revolutionary political tools.
A recent study on the diffusion of misinformation on social media explains how the extreme techniques designed for virality affect the dissemination of misinformation. Compared with facts, which tend to be static and reliant on the content itself for virality, misinformation tends to be dynamic, recirculating as it is shared either organically or with bad intent. This results in a stark difference in reach between the two types of information: facts typically experience a single spike in virality, while misinformation experiences multiple peaks due to its tendency to evolve.

Misinformation as a costly endeavour

The proposed platform promotes truth through positive reinforcement, as opposed to highlighting false information, which can act as a counterproductive polarising force. On the platform, authentication takes place when content is submitted by a publisher: a parked car is selected at random and an open-source algorithm is run on its internal computer. Using cars as expensive computers protects the system from Sybil attacks, while encryption ensures biases do not influence the authentication.
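A minimal sketch of how a validator might be chosen is below. The deterministic hash-based draw, the registry format, and the seed source are assumptions used for illustration; the appeal of a selection every node can recompute is that it avoids trusting a central coordinator.

```python
import hashlib

# Illustrative sketch of validator selection for a submission.
# The registry of vehicle IDs and the round seed are assumptions.

def select_validator(parked_cars: list[str], content_hash: str, round_seed: str) -> str:
    """Deterministically pick one registered vehicle per submission.

    Hashing the content together with a shared round seed gives a
    pseudo-random but reproducible choice, so any node can verify
    which car should have run the authentication. Because each
    validator identity is backed by a physical car (an expensive
    computer), flooding the registry with fake identities to sway
    results (a Sybil attack) is prohibitively costly.
    """
    digest = hashlib.sha256(f"{content_hash}:{round_seed}".encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(parked_cars)
    return parked_cars[index]
```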
On a user’s feed, content is categorised into three groups. If content contains no false information, it is presented with a tick. If there is a degree of misinformation, the platform highlights the statements that can be objectively validated as true or false. And if opinions are published, the platform identifies them as such so they are not misconstrued as objective facts. These classifications are highlighted to users as they view content, so they are aware of the validity of what they are reading. The classifications slowly fade over time, to account for new contradictory information that may emerge after the fact and to show that the authentication was most valid at the time of publishing.
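The three categories and the fading behaviour could be modelled as below; the enum names and the 90-day half-life are illustrative assumptions rather than the project's chosen values.

```python
import time
from enum import Enum

# Illustrative sketch of the three content classifications and a
# time-decaying label. The half-life value is an assumption.

class Verdict(Enum):
    VERIFIED = "tick"       # no false information detected
    MIXED = "highlighted"   # checkable statements marked true/false
    OPINION = "opinion"     # subjective content, labelled as such

def label_opacity(published_at: float, half_life_days: float = 90.0) -> float:
    """Fade the classification label as it ages (1.0 = fully visible).

    The fade reflects that the authentication was most valid at the
    time of publishing and that contradictory information may have
    emerged since.
    """
    age_days = (time.time() - published_at) / 86400.0
    return 0.5 ** (age_days / half_life_days)
```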
A user has a personalised feed based on their topical preferences, presented alongside trending content and breaking stories. Content containing no misinformation appears higher on a user’s feed than unauthenticated content; this arrangement is optimised to prevent the virality of misinformation. If a piece of content contains misinformation, a factual post on the same subject is presented alongside it, to ensure that the truth is promulgated over misinformation. To ensure users are not confined to a closed loop of information, the platform also populates a user’s feed with a few ‘random’ stories from topical areas the user does not regularly engage with. When users interact with content about a topic that is gaining significant attention, they are presented with an Opinion Map, which gathers all content about the associated issue, assesses it using Natural Language Processing, and visually groups it by opinion. The user’s autonomy to explore information in this map functions as a softer way to break echo chambers than binary fact-checking.
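A minimal sketch of such a feed-ordering step follows, assuming each post carries a personalised relevance score, an authentication flag, and a familiar-topic flag; the field names, the ranking penalty, and the exploration rate are all illustrative assumptions.

```python
import random

# Illustrative sketch of feed ordering: authenticated content outranks
# unauthenticated content, and a few out-of-interest stories are
# spliced in to counter echo chambers. Field names are assumptions.

def rank_feed(posts: list[dict], exploration_rate: float = 0.1) -> list[dict]:
    def score(post: dict) -> float:
        # Halve the effective relevance of unauthenticated content so
        # it cannot outrank verified posts of similar interest.
        penalty = 0.0 if post["authenticated"] else 0.5
        return post["relevance"] * (1.0 - penalty)

    familiar = sorted(
        (p for p in posts if p["familiar_topic"]), key=score, reverse=True
    )
    unfamiliar = [p for p in posts if not p["familiar_topic"]]
    random.shuffle(unfamiliar)

    # Splice a handful of 'random' unfamiliar stories at spread-out
    # positions rather than burying them at the bottom of the feed.
    n_random = max(1, int(exploration_rate * len(familiar)))
    feed = list(familiar)
    step = max(1, len(familiar) // (n_random + 1))
    for i, post in enumerate(unfamiliar[:n_random]):
        feed.insert(min((i + 1) * step, len(feed)), post)
    return feed
```

Spreading the exploratory stories through the feed, rather than appending them, reflects the intent that users actually encounter unfamiliar perspectives during normal browsing.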