The researchers also found that anti-vaccination communities offer more diverse narratives around vaccines and other established health treatments—promoting safety concerns, conspiracy theories or individual choice, for example—that can appeal to more of Facebook’s approximately 3 billion users, thus increasing the chances of influencing individuals in undecided communities. Pro-vaccination communities, on the other hand, mostly offered monothematic messaging, typically focused on the established public health benefits of vaccinations. The GW researchers noted that individuals in these undecided communities, far from being passive bystanders, were actively engaging with vaccine content.
“We thought we would see major public health entities and state-run health departments at the center of this online battle, but we found the opposite. They were fighting off to one side, in the wrong place,” Dr. Johnson said.
As scientists around the world scramble to develop an effective COVID-19 vaccine, the spread of health disinformation and misinformation has important public health implications, especially on social media, which often serves as an amplifier and information equalizer. In their study, the GW researchers proposed several strategies to counter online disinformation, including influencing the heterogeneity of individual communities to delay their onset and decrease their growth, and manipulating the links between communities in order to prevent the spread of negative views.
“Instead of playing whack-a-mole with a global network of communities that consume and produce (mis)information, public health agencies, social media platforms and governments can use a map like ours and an entirely new set of strategies to identify where the largest theaters of online activity are and engage and neutralize those communities peddling in misinformation so harmful to the public,” Dr. Johnson said.
Amazon says it’s working hard on preventing price gouging and reports having removed nearly 4,000 accounts from the marketplace. “Sellers set their own product prices in our store and we have policies to help ensure sellers are pricing their products competitively,” an Amazon spokesperson told The Markup in an email. “We actively monitor our store and remove offers that violate our policies. We have implemented additional measures to keep prices low and our global teams are working 24/7 to monitor prices in our store.”
Based on our groundbreaking 2018 research on the public’s digital attitudes and understanding, we ran a nationally representative survey just before lockdown and focus groups shortly after it began, benchmarking the public’s appetite for, understanding of and tolerance towards the impacts of tech on their lives.
This year’s research finds people continue to feel the internet is better for them as individuals than for society as a whole. 81% say the internet has made life a lot or a little better for ‘people like me’, while 58% say it has had a very positive or fairly positive impact on society overall.
Our most recent Dialogues & Debates, on April 28, was titled “Trustworthy Technology in the Time of a Pandemic” and featured Berlin-based Mozilla Fellow Frederike Kaltheuner. As a Mozilla Fellow, Frederike is examining applications of AI systems that classify, judge, and evaluate people’s identities, feelings, and emotions, in order to uncover where and how such technology is currently deployed. Before joining Mozilla, Frederike was a director at Privacy International in London, where she led the organisation’s strategic work on corporate surveillance and emerging technology.
During the virtual Dialogues & Debates, Frederike answered questions submitted by our community on Twitter. Questions and answers addressed contact tracing, whether privacy and public health are a zero-sum game, whether face masks thwart facial recognition tech, and much more.
Watch the recording below, and find the full transcript of the event here. Learn more about Dialogues & Debates here.
House lawmakers leading an antitrust investigation into Amazon demanded Friday that CEO Jeff Bezos testify about the company’s alleged practice of gleaning financial information from third-party sellers to bolster its own private label business.
Naomi Oreskes, a professor of the history of science at Harvard, has focussed much of her career on examining distrust of science in the United States. In 2010, she and the historian Erik M. Conway published “Merchants of Doubt,” which examined the ways in which politics and big business have helped sow doubt about the scientific consensus. Her most recent book, “Why Trust Science?,” examines how our idea of the scientific method has changed over time, and how different societies went about verifying its accuracy. Her work often addresses climate change and why Americans have rejected climate-change science more than people in other countries have.
The coronavirus pandemic meets Israel at a moment of deep constitutional crisis. The current government suffers from fundamental distrust among citizens and lacks legitimacy, having failed to regain power across three election cycles.
Restoring trust in a time of emergency is essential for overcoming national crises. Restoring trust in our social contract requires compliance with the rule of law. Securing fundamental rights is therefore not a luxury in a time of crisis. It is a must for winning the fight against the virus. It is also a must for ensuring we wake up in a free society at the other end of the crisis.
Facebook said this week it would be sending all of its contracted human moderators home. The company cannot offer remote working for its moderation staff owing to privacy considerations over the material they handle, and so its moderation work will be done exclusively by permanent employees for the foreseeable future.
Facebook says the absence of human moderators was not related to the spam filter error and it believes it is well prepared for moderating the site with a vastly reduced human workforce.
Kang-Xing Jin, Facebook’s head of health, said: “We believe the investments we’ve made over the past three years have prepared us for this situation. With fewer people available for human review, we’ll continue to prioritise imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way.”
Facebook is not the only technology firm to have sent home its moderators. YouTube announced on Monday that it would be relying more on AI to moderate videos in the future. Unlike Facebook, the video site did not commit to the change being invisible to users. Instead, it said more videos would be taken down as a result of the lack of human oversight.
Normally, YouTube videos are flagged by an AI and then sent to a human reviewer to confirm they should be taken down. But now videos will far more frequently be removed on the say-so of an AI alone. The company says it will not be giving creators a permanent black mark, or “strike”, if their videos are taken down without human review, since it accepts that it will inevitably end up taking down “some videos that may not violate policies”.