But, as Beck acknowledged, there were also at least two other possibilities. One was a retro politics of going back to the future. This would be a politics that aimed to restore the certainty of social development and the rule of organized politics and scientific reason that had guided the first modernity. The United States’ “war on terror” was one such attempt. It turned a 21st-century security risk into a conventional war against Saddam Hussein’s regime in Iraq. It was a disaster. The most successful effort to control risk society within the framework of a classic industrial modernity is China. Its response to the COVID-19 crisis has put that on full display. COVID-19 was contained and CCP rule ensured by a full-bore mobilization of societal discipline, targeted deployment of medical spending, and state power, all of it clad in the guise of what the regime calls 21st-century Marxism, a self-confident narrative of modernization and progress. There is no room for questioning the modern epic of the China dream. The lack of a positive attitude is enough to trigger suspicion.
In Nuñez’s eyes, Facebook is not a trustworthy interlocutor. “The company seems to be pretty comfortable with obfuscating the truth, and that’s why people don’t trust Facebook anymore,” he says. “They’ve had the chance to be honest and transparent plenty of times, and time and time again, you see that the company has been misleading either by choice or by willful ignorance.”
Select Committee on Democracy and Digital Technologies
Digital Technology and the Resurrection of Trust
Report of Session 2019-21 – published 29 June 2020 – HL Paper 77
E-governance solutions are most successful in small countries with a young population, high trust in institutions, and a historical need for technological renewal. In fact, the successful e-governance model of Estonia relies on transparency and accountability: most user data are openly available to government institutions, while citizens can track every single request for their data and have the right to demand clear justifications for its usage.
A team drawn from the BBC’s Technology Strategy and Architecture and Research & Development departments is now working, with a range of external technology and media partners, on a way of indelibly ‘marking’ content at the point it is published so that it can be identified wherever it ends up in that vast ecosystem we call the Internet.
Further detection techniques, which would show where ‘marked’ content has been manipulated, could then be added into the process. The idea is that these signals would be readable both by machines, so that automated actions can be taken to flag or even remove suspect content, and by humans: journalists and our audiences. We’ve called this work, which is still at an early stage, ‘Project Origin’.
The technology needed to make it work is complex and multi-faceted, drawing on techniques such as watermarking, hashing and fingerprinting. A key challenge is that any signal needs to be robust enough to survive the many non-malicious things that can happen to a piece of content such as compression, resizing and so on.
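Project Origin has not published its techniques, but the robustness requirement described above is what perceptual fingerprinting is designed for. As a rough illustration only, here is a minimal difference-hash (“dHash”) sketch in pure Python; the `nearest_resize` helper, the 64×64 gradient “frame”, and the specific transformations are all invented for this example and do not represent the BBC’s actual system:

```python
def nearest_resize(img, w, h):
    """Downscale a grayscale image (list of pixel rows) by nearest-neighbour sampling."""
    src_h, src_w = len(img), len(img[0])
    return [[img[y * src_h // h][x * src_w // w] for x in range(w)]
            for y in range(h)]

def dhash(img, size=8):
    """Difference hash: shrink to (size+1) x size, then record whether each
    pixel is darker than its right-hand neighbour. Returns a 64-bit int."""
    small = nearest_resize(img, size + 1, size)
    bits = 0
    for row in small:
        for x in range(size):
            bits = (bits << 1) | (1 if row[x] < row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A 64x64 horizontal gradient stands in for a real video frame.
frame = [[x * 4 for x in range(64)] for _ in range(64)]

original   = dhash(frame)
brightened = dhash([[min(p + 10, 255) for p in row] for row in frame])
resized    = dhash(nearest_resize(frame, 32, 32))

print(hamming(original, brightened))  # 0 -- a uniform brightness shift survives
print(hamming(original, resized))     # 0 -- downscaling survives
```

Because the hash encodes only the *relative* brightness of neighbouring regions, benign operations like resizing or re-encoding leave it largely intact, while substantive manipulation of the image changes many bits; this is the property that lets a signal “survive the many non-malicious things that can happen to a piece of content.”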
Other issues include editorial considerations such as deciding which content to mark:
- If we only mark potentially ’sensitive’ content, does this create problems when a ’signal’ cannot be found in other content, leaving that content appearing less trustworthy because it cannot be verified as genuine?
- What will marking content do to our workflows in terms of added effort and complexity?
The project the BBC is doing in this area sits alongside other work we and others are doing in the disinformation space. Examples are:
- the range of strong editorial content we are creating about the dangers of disinformation such as the ‘Beyond Fake News’ strand
- the work we are doing on media literacy and the partnerships we are building to collaborate with other media and technology organisations.
The BBC is a member of the Partnership on AI’s Media Integrity Steering Group, which last year launched the Deepfake Detection Challenge with Facebook, AWS and others.
The Project Origin team currently aims to test its first solutions sometime this summer, building on these as we develop and strengthen our partnerships in this area. The eventual ambition is a system which is simple to use, transparent and with open standards that can be widely adopted for public good.
Amazon.com Inc said it is implementing a one-year moratorium on police use of its facial recognition software, a reversal of its longtime defence of law enforcement’s use of the technology.
The tech giant is the latest to step back from law-enforcement use of systems that have faced criticism for incorrectly identifying people with darker skin. The Seattle-based company did not say why it took action now.
Public trust in the UK government as a source of accurate information about the coronavirus has collapsed in recent weeks, suggesting ministers may struggle to maintain lockdown restrictions in the aftermath of the Dominic Cummings affair. According to surveys conducted on behalf of the University of Oxford’s Reuters Institute by YouGov, less than half of Britons now trust the Westminster government to provide correct information on the pandemic – down from more than two-thirds of the public in mid-April.
Local news stations across the U.S. aired a segment produced and scripted by Amazon which touts the company’s role in delivering essential groceries and cleaning products during the COVID-19 pandemic, and its ability to do so while “keeping its employees safe and healthy.”
The segment, which was aired by at least 11 local TV stations, and which was introduced with a script written by Amazon and recited verbatim by news anchors, presents a fawning picture of Amazon, which has struggled to deliver essential items during the pandemic, support the sellers that rely on its platform, and provide its workers with the necessary protective equipment. Each anchor introduces the script, then throws to an Amazon-produced look “inside” an Amazon fulfillment center, which is narrated by Amazon spokesperson Todd Walker:
Rebekah Jones said in an email to CBS12 News that her removal was “not voluntary” and that she was removed from her position because she was ordered to censor some data, but refused to “manually change data to drum up support for the plan to reopen.”