In Nuñez’s eyes, Facebook is not a trustworthy interlocutor. “The company seems to be pretty comfortable with obfuscating the truth, and that’s why people don’t trust Facebook anymore,” he says. “They’ve had the chance to be honest and transparent plenty of times, and time and time again, you see that the company has been misleading either by choice or by willful ignorance.”
Select Committee on Democracy and Digital Technologies
Digital Technology and the Resurrection of Trust
Report of Session 2019-21 – published 29 June 2020 – HL Paper 77
e-governance solutions are most successful in small countries with young populations, high trust in institutions, and a historical need for technological renewal. In fact, the successful e-governance model of Estonia relies on transparency and accountability: most user data are openly available to government institutions, while citizens can follow up on every single request for their data and have the right to demand clear justification for its usage.
A team drawn from the BBC’s Technology Strategy and Architecture and Research & Development departments is now working, with a range of external technology and media partners, on a way of indelibly ‘marking’ content at the point it is published so that it can be identified wherever it ends up in that vast ecosystem we call the Internet.
Further detection techniques, which would show where ‘marked’ content has been manipulated, could then be added into the process. The idea is that these signals would be readable both by machines, so that automated actions can be taken to flag or even remove suspect content, and by humans: journalists and our audiences. We’ve called this work, which is still at an early stage, ‘Project Origin’.
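To make the idea of a machine-readable provenance signal concrete, here is a minimal sketch of marking content with a signed manifest at the point of publication. This is purely illustrative and not Project Origin’s actual design: the manifest fields, function names, and the shared-secret key are all assumptions (a real system would use public-key signatures so that verifiers never hold the signing key).

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key; in practice this would be an
# asymmetric key pair (e.g. Ed25519), not a shared secret.
PUBLISHER_KEY = b"example-publisher-signing-key"

def mark_content(content: bytes, publisher: str) -> dict:
    """Create a signed provenance manifest for a piece of content."""
    manifest = {
        "publisher": publisher,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(
        PUBLISHER_KEY, payload, hashlib.sha256
    ).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check the manifest signature and that the content is unmodified."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest tampered with or not from this publisher
    return claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
```

Note that an exact cryptographic hash like this breaks as soon as a single byte changes, even through benign recompression or resizing; surviving such transformations is exactly the robustness challenge described below.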
The technology needed to make it work is complex and multi-faceted, drawing on techniques such as watermarking, hashing and fingerprinting. A key challenge is that any signal needs to be robust enough to survive the many non-malicious things that can happen to a piece of content such as compression, resizing and so on.
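To give a feel for why signals can survive resizing and compression, here is a minimal average-hash (“aHash”) perceptual fingerprint in pure Python, operating on a grayscale image represented as a 2D list of pixel values. The function names are illustrative assumptions, not part of any BBC system. Because the image is downsampled to a small grid before hashing, a moderately resized copy changes few or no bits, so matching uses a Hamming-distance threshold rather than exact equality.

```python
def average_hash(pixels, hash_size=8):
    """Perceptual hash: downsample to hash_size x hash_size blocks,
    then emit one bit per block (1 if above the overall mean)."""
    h, w = len(pixels), len(pixels[0])
    grid = []
    for i in range(hash_size):
        for j in range(hash_size):
            # Average the rectangle of source pixels mapping to cell (i, j).
            r0, r1 = i * h // hash_size, (i + 1) * h // hash_size
            c0, c1 = j * w // hash_size, (j + 1) * w // hash_size
            block = [pixels[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            grid.append(sum(block) / len(block))
    mean = sum(grid) / len(grid)
    return [1 if v > mean else 0 for v in grid]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests the same content."""
    return sum(a != b for a, b in zip(h1, h2))
```

A half-size copy of an image produces a hash within a few bits of the original’s, while an unrelated image lands much further away; picking the distance threshold that separates “same content” from “different content” is part of the engineering challenge.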
Other issues include editorial considerations such as deciding which content to mark:
- If we only mark potentially ‘sensitive’ content, does this create problems when a ‘signal’ cannot be found in other content, leaving that content less trusted because it appears not to be genuine?
- What will marking content do to our workflows in terms of added effort and complexity?
The BBC’s project in this area sits alongside other work we and others are doing in the disinformation space. Examples include:
- the range of strong editorial content we are creating about the dangers of disinformation such as the ‘Beyond Fake News’ strand
- the work we are doing on media literacy and the partnerships we are building to collaborate with other media and technology organisations.
The BBC is a member of the Partnership on AI’s Media Integrity Steering Group, which last year launched the Deepfake Detection Challenge with Facebook, AWS and others.
The Project Origin team currently aims to test its first solutions sometime this summer, building on these as we develop and strengthen our partnerships in this area. The eventual ambition is a system which is simple to use, transparent and with open standards that can be widely adopted for public good.
Amazon.com Inc said it is implementing a one-year moratorium on police use of its facial recognition software, a reversal of its longtime defence of law enforcement’s use of the technology.
The tech giant is the latest to step back from law-enforcement use of systems that have faced criticism for incorrectly identifying people with darker skin. The Seattle-based company did not say why it took action now.
Public trust in the UK government as a source of accurate information about the coronavirus has collapsed in recent weeks, suggesting ministers may struggle to maintain lockdown restrictions in the aftermath of the Dominic Cummings affair. According to surveys conducted on behalf of the University of Oxford’s Reuters Institute by YouGov, fewer than half of Britons now trust the Westminster government to provide correct information on the pandemic – down from more than two-thirds of the public in mid-April.
Local news stations across the U.S. aired a segment produced and scripted by Amazon which touts the company’s role in delivering essential groceries and cleaning products during the COVID-19 pandemic, and its ability to do so while “keeping its employees safe and healthy.”
The segment, which was aired by at least 11 local TV stations, and which was introduced with a script written by Amazon and recited verbatim by news anchors, presents a fawning picture of Amazon, which has struggled to deliver essential items during the pandemic, support the sellers that rely on its platform, and provide its workers with the necessary protective equipment. Each anchor introduces the script, then throws to an Amazon-produced look “inside” an Amazon fulfillment center, which is narrated by Amazon spokesperson Todd Walker:
Rebekah Jones said in an email to CBS12 News that her removal was “not voluntary” and that she was removed from her position because she was ordered to censor some data, but refused to “manually change data to drum up support for the plan to reopen.”
The researchers also found anti-vaccination communities offer more diverse narratives around vaccines and other established health treatments—promoting safety concerns, conspiracy theories or individual choice, for example—that can appeal to more of Facebook’s approximately 3 billion users, thus increasing the chances of influencing individuals in undecided communities. Pro-vaccination communities, on the other hand, mostly offered monothematic messaging typically focused on the established public health benefits of vaccinations. The GW researchers noted that individuals in these undecided communities, far from being passive bystanders, were actively engaging with vaccine content.
“We thought we would see major public health entities and state-run health departments at the center of this online battle, but we found the opposite. They were fighting off to one side, in the wrong place,” Dr. Johnson said.
As scientists around the world scramble to develop an effective COVID-19 vaccine, the spread of health disinformation and misinformation has important public health implications, especially on social media, which often serves as an amplifier and information equalizer. In their study, the GW researchers proposed several different strategies to fight online disinformation, including influencing the heterogeneity of individual communities to delay their onset and decrease their growth, and manipulating the links between communities in order to prevent the spread of negative views.
“Instead of playing whack-a-mole with a global network of communities that consume and produce (mis)information, public health agencies, social media platforms and governments can use a map like ours and an entirely new set of strategies to identify where the largest theaters of online activity are and engage and neutralize those communities peddling in misinformation so harmful to the public,” Dr. Johnson said.