Publications – Trust in Digital Life

Trust in Digital Life is a membership association of leading industry partners and knowledge institutes that exchange experience and share customer, market and technology insights, with the aim of improving the quality of trustworthy digital services and platforms through joint research and development. Our vision is a vibrant European Digital Single Market that benefits, and can be trusted by, both businesses and citizens.


Trust in Digital Life Ecosystem letter

Source: Publications – Trust in Digital Life

Ulrich Beck Is the World’s Most Important Pandemic Intellectual

But, as Beck acknowledged, there were also at least two other possibilities. One was a retro politics of going back to the future. This would be a politics that aimed to restore the certainty of social development and the rule of organized politics and scientific reason that had guided the first modernity. The United States’ “war on terror” was one such attempt. It turned a 21st-century security risk into a conventional war against Saddam Hussein’s regime in Iraq. It was a disaster. The most successful effort to control risk society within the framework of a classic industrial modernity is China. Its response to the COVID-19 crisis has put that on full display. COVID-19 was contained and CCP rule ensured by a full-bore mobilization of societal discipline, targeted deployment of medical spending, and state power, all of it clad in the guise of what the regime calls 21st-century Marxism, a self-confident narrative of modernization and progress. There is no room for questioning the modern epic of the China dream. The lack of a positive attitude is enough to trigger suspicion.

Source: Ulrich Beck Is the World’s Most Important Pandemic Intellectual

Spies, Lies, and Stonewalling: What It’s Like to Report on Facebook – Columbia Journalism Review

In Nuñez’s eyes, Facebook is not a trustworthy interlocutor. “The company seems to be pretty comfortable with obfuscating the truth, and that’s why people don’t trust Facebook anymore,” he says. “They’ve had the chance to be honest and transparent plenty of times, and time and time again, you see that the company has been misleading either by choice or by willful ignorance.”

House of Lords – Digital Technology and the Resurrection of Trust – Select Committee on Democracy and Digital Technologies


Report of Session 2019-21 – published 29 June 2020 – HL Paper 77


Foreword by the Chair


Chapter 1: Introduction

Box 1: Definition of platforms

Figure 1: Examples of daily activity across social media platforms globally

The Committee’s work and acknowledgements

Chapter 2: Informed Citizens

Box 2: Definition of misinformation and disinformation

Misinformation and the media

Political advertising

Tackling misinformation and disinformation online

The role of fact checkers

Promoting good information

Communicating statistics

Making use of parliamentary expertise

Public interest journalism

Chapter 3: Accountability

Accountability and the technology platforms

The Online Harms agenda

Figure 2: Timeline of progress on the Online Harms White Paper

Freedom of expression in the online world

Platforms’ ultimate responsibility under a duty of care

Content moderation oversight

Appealing platforms’ decisions

Parliamentary oversight

Regulatory capacity

Box 3: The Regulators

Chapter 4: Transparency

Do platforms cause polarisation and degrade democratic discourse?

Targeted advertising

Foreign interference

Filter bubbles

Algorithmic design and outrage factories

Access for independent researchers

Algorithmic transparency

Box 4: How Google’s algorithms work

Algorithmic bias

Transparency in content moderation

Box 5: President Trump and content moderation study

Chapter 5: Inclusive debate across society

The role of technology in tackling the challenges facing democracy

Supporting technological innovation in democracy

Online voting

Technology as a tool, but not a panacea for problems facing democracy

A democratic information hub

How Government and Parliament could better use digital tools

Chapter 6: Free and fair elections

Box 6: Definition of campaigner

Electoral law

Figure 3: Reported spending by campaigners on digital advertising as a percentage of total advertising spend

Figure 4: Timeline of electoral developments throughout modern British history



Box 7: Imprints

Electoral Commission powers

Outside the formal investigation period

Campaigners’ receipts


Oversight powers

Small donations and spending

Advert databases

Box 8: Mozilla Guidelines for Effective Advert Archives

Campaigners’ use of personal data

Chapter 7: Active digital citizens

Political literacy

Digital skills and digital media literacy

Box 9: Definition of digital media literacy

Table 1: Digital Media Literacy and Digital Skills in the Curriculum

Lessons from abroad

Table 2: Digital pedagogy in Estonia and Finland

Who has responsibility for digital media literacy?

Teaching digital media literacy

Box 10: JCQ Statistics on take-up of computing GCSE and A-level

Making social media companies understandable to the public

Anonymity as a barrier to understanding content on the internet

Source: House of Lords – Digital Technology and the Resurrection of Trust – Select Committee on Democracy and Digital Technologies

OII | Beyond Contact-Tracing Apps – How Trust Shapes E-Governance — Oxford Internet Institute

E-governance solutions are most successful in small countries with a young population, high trust in institutions, and a historical need for technological renewal. In fact, the successful e-governance model of Estonia relies on transparency and accountability: most user data are openly available to government institutions, while citizens can follow up on every single request for their data and have the right to demand clear justification for its usage.
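The transparency mechanism described above can be sketched as an append-only access log that citizens can inspect. This is a minimal, hypothetical Python illustration of the principle — the class and field names are invented for this example and are not Estonia’s actual X-Road implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: every institutional read of a citizen's data is
# recorded with a justification, and the citizen can query the full log.
# Names and structure are assumptions, not a real government API.

@dataclass
class AccessEvent:
    institution: str
    record: str
    justification: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class CitizenDataLog:
    def __init__(self) -> None:
        self._events: list[AccessEvent] = []

    def record_access(self, institution: str, record: str,
                      justification: str) -> None:
        # Append-only: entries are never edited or removed,
        # so the citizen's view of access history is complete.
        self._events.append(AccessEvent(institution, record, justification))

    def events_for_citizen(self) -> list[AccessEvent]:
        # The citizen sees every request made for their data.
        return list(self._events)

log = CitizenDataLog()
log.record_access("Health Board", "vaccination record",
                  "booster eligibility check")
log.record_access("Tax Office", "income record", "annual assessment")

for e in log.events_for_citizen():
    print(e.institution, "->", e.record, ":", e.justification)
```

The design choice that matters here is the append-only log: accountability comes from the fact that institutions cannot access data without leaving a citizen-visible trace.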

Source: OII | Beyond Contact-Tracing Apps – How Trust Shapes E-Governance — Oxford Internet Institute

Project Origin: Securing trust in a complex media landscape – BBC Academy

A team drawn from the BBC’s Technology Strategy and Architecture and Research & Development departments is now working, with a range of external technology and media partners, on a way of indelibly ‘marking’ content at the point it is published so that it can be identified wherever it ends up in that vast ecosystem we call the Internet.

Further detection techniques, which would show where ‘marked’ content has been manipulated, could then be added to the process. The idea is that these signals would be readable both by machines, so that automated actions can be taken to flag or even remove suspect content, and by humans: journalists and our audiences. We’ve called this work, which is still at an early stage, ‘Project Origin’.

The technology needed to make it work is complex and multi-faceted, drawing on techniques such as watermarking, hashing and fingerprinting. A key challenge is that any signal needs to be robust enough to survive the many non-malicious things that can happen to a piece of content such as compression, resizing and so on.
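The marking step described above can be sketched with a simple cryptographic mark. This is a hedged illustration, not Project Origin’s actual design: the key and function names are invented, and a plain HMAC breaks under any re-encoding — which is exactly the robustness problem the paragraph raises, and why real systems also need perceptual fingerprints or watermarks that survive compression and resizing:

```python
import hashlib
import hmac

# Hypothetical publisher key -- in practice this would be an asymmetric
# signing key managed by the news organisation, not a shared secret.
PUBLISHER_KEY = b"demo-publisher-key"

def mark_content(content: bytes) -> str:
    """Produce a provenance mark for content at the point of publication."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, mark: str) -> bool:
    """Check whether content still matches the mark it was published with."""
    return hmac.compare_digest(mark_content(content), mark)

original = b"Example article body as published"
mark = mark_content(original)

print(verify_content(original, mark))                    # True
print(verify_content(b"Doctored article body", mark))    # False
```

Note that even benign changes (recompression, resizing) would flip the check to False here — the gap between this exact-match behaviour and a signal robust to non-malicious transformation is the key challenge the text describes.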

Other issues include editorial considerations such as deciding which content to mark:

  • If we only mark potentially ‘sensitive’ content, does this create problems when a ‘signal’ cannot be found in other content, leaving that content less valued because it is seen as not genuine?
  • What will marking content do to our workflows in terms of added effort and complexity?

The BBC’s project in this area sits alongside other work that we and others are doing in the disinformation space. Examples include:

  • the range of strong editorial content we are creating about the dangers of disinformation such as the ‘Beyond Fake News’ strand
  • the work we are doing on media literacy and the partnerships we are building to collaborate with other media and technology organisations.

The BBC is a member of the Partnership on AI’s Media Integrity Steering Group, which last year launched the Deepfake Detection Challenge with Facebook, AWS and others.

The Project Origin team currently aims to test its first solutions sometime this summer, building on these as we develop and strengthen our partnerships in this area. The eventual ambition is a system which is simple to use, transparent and with open standards that can be widely adopted for public good.

Source: Project Origin: Securing trust in a complex media landscape – BBC Academy

500 Estonian Crypto Companies Lose Permits After $220B Scandal

Estonia, one of the European Union’s most crypto-friendly countries, is cracking down on hundreds of licensed crypto companies in response to a $220 billion money laundering scandal, according to Bloomberg. Estonia was among the first EU countries to license crypto companies but has been forced to clamp down after hundreds of billions of dollars of dirty money were detected flowing through the Estonian unit of Denmark’s largest lender, Danske Bank A/S. The scandal has put the country at the center of Europe’s biggest money laundering case.

Source: 500 Estonian Crypto Companies Lose Permits After $220B Scandal

Amazon pauses police use of its facial recognition software | News | Al Jazeera

Amazon.com Inc said it is implementing a one-year moratorium on police use of its facial recognition software, a reversal of its longtime defence of law enforcement’s use of the technology.

The tech giant is the latest to step back from law-enforcement use of systems that have faced criticism for incorrectly identifying people with darker skin. The Seattle-based company did not say why it took action now.

Source: Amazon pauses police use of its facial recognition software | News | Al Jazeera