The rule of law in the time of coronavirus outbreak | Internet Policy Review

The coronavirus pandemic meets Israel at a moment of deep constitutional crisis. The current government suffers from fundamental distrust among citizens and lacks legitimacy, having failed to secure a clear mandate across three election cycles.

Restoring trust in a time of emergency is essential for overcoming national crises. Restoring trust in our social contract requires compliance with the rule of law. Securing fundamental rights is therefore not a luxury in time of crisis. It is a must for winning the fight against the virus. It is also a must for ensuring we wake up in a free society at the other end of the crisis.

Source: The rule of law in the time of coronavirus outbreak | Internet Policy Review

Facebook says spam filter mayhem not related to coronavirus | Technology | The Guardian

Facebook said this week it would be sending all of its contracted human moderators home. The company cannot offer remote working for its moderation staff owing to privacy considerations over the material they handle, and so its moderation work will be done exclusively by permanent employees for the foreseeable future.

Facebook says the absence of human moderators was not related to the spam filter error and it believes it is well prepared for moderating the site with a vastly reduced human workforce.

Kang-Xing Jin, Facebook’s head of health, said: “We believe the investments we’ve made over the past three years have prepared us for this situation. With fewer people available for human review, we’ll continue to prioritise imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way.”

Facebook is not the only technology firm to have sent home its moderators. YouTube announced on Monday that it would be relying more on AI to moderate videos in the future. Unlike Facebook, the video site did not commit to the change being invisible to users. Instead, it said more videos would be taken down as a result of the lack of human oversight.

Normally, YouTube videos are flagged by an AI and then sent to a human reviewer to confirm they should be taken down. But now videos will far more frequently be removed on the say-so of an AI alone. The company says it will not be giving creators a permanent black mark, or “strike”, if their videos are taken down without human review, since it accepts that it will inevitably end up taking down “some videos that may not violate policies”.
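The flag-then-review workflow described above can be sketched in a few lines. This is a toy illustration, not YouTube's actual system: all names, thresholds, and the review stand-in are invented for the sketch.

```python
# Hypothetical sketch of a two-stage moderation pipeline: an AI classifier
# flags videos, and a human reviewer normally confirms before removal.
# With no humans available, removal happens on the AI's say-so alone,
# but no "strike" is issued, since some removals may be mistaken.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    ai_violation_score: float  # classifier confidence, 0.0 to 1.0


def human_review(video: Video) -> bool:
    # Stand-in for a real review queue; here, reviewers uphold
    # only high-confidence flags.
    return video.ai_violation_score >= 0.9


def moderate(video: Video, humans_available: bool,
             threshold: float = 0.8) -> tuple[bool, bool]:
    """Return (removed, strike_issued) for a video."""
    if video.ai_violation_score < threshold:
        return (False, False)  # not flagged at all
    if humans_available:
        # Normal path: a confirmed removal can carry a strike.
        confirmed = human_review(video)
        return (confirmed, confirmed)
    # Reduced-workforce path: remove without confirmation, no strike.
    return (True, False)
```

The key design point mirrored from the article is that the strike decision is decoupled from the removal decision whenever the human stage is skipped.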

Source: Facebook says spam filter mayhem not related to coronavirus | Technology | The Guardian

Coronavirus books plagiarized from news outlets dominate Amazon search results

At first glance, Richard J. Baily’s book, “Coronavirus: Everything You Need to Know About the Wuhan Corona Virus and How to Prevent It,” appears to be an authoritative deep dive on how to prepare for the pandemic.

The book was the top “coronavirus” search result on Amazon for many users Tuesday — and not just in Amazon’s books section. The guidebook appeared before Clorox wipes and hand sanitizer, let alone any book written by a doctor or public health specialist. The first pages are filled with a well-written, useful primer called “What is Coronavirus?”

The book, however, isn’t what it appears to be. Each of the book’s chapters was directly plagiarized from other parts of the web. The first two chapters were lifted verbatim from NBC News stories by Erika Edwards and Sara Miller published in late January. The third chapter, which is dedicated to cleaning tips, was ripped from the website for Nancy’s Cleaning Services, a housekeeping company based in California. The remainder of the book is plagiarized verbatim from articles on ChinaLawBlog.com and The Guardian, according to a copy seen by NBC News.

Other books that appeared high on Amazon search results had similar issues, as well as authors whose identities could not be verified. The listings highlight the challenge that Amazon has faced as consumers have turned to the company for a variety of goods and information related to the new coronavirus.

Amazon has taken action to limit price gouging and removed more than 1 million products for making misleading claims.

An Amazon spokesperson said the company uses a combination of automated and human moderation tools to find and remove content that violates its guidelines.

“Amazon maintains content guidelines for the books it sells, and we continue to evaluate our catalog, listening to customer feedback. We have always required sellers, authors and publishers to provide accurate information on product detail pages, and we remove those that violate our policies,” an Amazon spokesperson said.

“In addition, at the top of relevant search results pages we are linking to CDC advice where customers can learn more about the virus and protective measures.”

Plagiarized books by nonexistent authors clogging the top of Amazon search results have become a familiar problem to real, independent authors who try to sell books on Amazon. Fake e-books, frequently plagiarized from original works, often crowd out authentic books and products at the top of Amazon pages.

During a mass health emergency, the plagiarized e-books could be filled with unreliable information, drowning out higher-quality products that could better safeguard the public. But Nate Hoffelder, the founder and editor of the e-reader site The Digital Reader, said similar scams have been going on at Amazon for years.

“Most of Amazon’s e-book competitors have a content approval process in place that keeps the worst content out,” Hoffelder said. “After 10 years of watching the internet grow and change, I have come to the conclusion that the platform is the issue. Internet companies try to run huge automated systems with little human oversight, and scammers can take advantage of the algorithms.”

Other platforms, like YouTube, have taken steps to prohibit users from profiting from coronavirus content. While creators are still allowed to upload coronavirus videos to YouTube, running ads alongside them was initially disallowed, though the company said Wednesday it would begin allowing ads for some partners and creators.

Hoffelder said that the first time he remembered fake books being uploaded to Amazon’s store was in 2015, when users were creating short books and putting them on Amazon’s Kindle Unlimited subscription platform. Scammers were paid every time one of the books was loaned out using the program. Each of the plagiarized coronavirus books NBC News noticed on the first page of search results was eligible for a loan using Kindle Unlimited.

“I don’t know if it’s a case of can’t or won’t,” Hoffelder said. “What I do know is that Amazon is the only retailer with this problem. They have the lion’s share of the e-book market revenue, and could easily afford to use the same quality assurance processes as their competitors, and yet they still have this problem.”

Baily and Wang are not the only pseudonymous or nonexistent authors to take over the first page of results about the coronavirus on Amazon. “Dr. Kashif Saeed’s” book shares an almost identical description to Baily’s, except for several grammatical errors. “Coronavirus is that the word that’s one everyone’s lips immediately,“ the typo-ridden first line of the book’s description reads. “Saeed’s” other books alternately claim he is a veterinarian and a social media marketing and SEO expert.

Other books, like “Ramond Moroe’s” 38-page e-book “Sheild (sic) Against Corona-Virus: The Best Guide Ever (2020)” are littered with typos when they’re not directly plagiarizing news articles.

“In this book you will found a natural way to stay safe, all you needs to do is ..CLICK add to cart,” the description reads. The book frequently appears in the first page of search results when users query “coronavirus.” It was the eighth-ranked product on a sitewide Amazon search for “coronavirus” at publication time.

Some of the plagiarized books are filled with unintelligible 5-star reviews, which can help boost their rankings in search results.

“This book extremely extraordinary, in the wake of perusing this book I am so intrigued,” reads one review of Baily’s top-ranked e-book. “On account of the writer and would prescribed for this book to anybody. Many thanks to the author for giving us such a beautiful book.”

Hoffelder said scammers always appear to be “one step ahead of Amazon’s changing rules.”

“It seems like there’s a new con every other year or so, and that every time Amazon swats one, the cheaters invent a new one,” he said. “The problem with scammers in the Kindle Store was the Kindle Store.”

Source: Coronavirus books plagiarized from news outlets dominate Amazon search results

The Follower Factory – The New York Times

All these accounts belong to customers of an obscure American company named Devumi that has collected millions of dollars in a shadowy global marketplace for social media fraud. Devumi sells Twitter followers and retweets to celebrities, businesses and anyone who wants to appear more popular or exert influence online. Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, the company has provided customers with more than 200 million Twitter followers, a New York Times investigation found.

Source: The Follower Factory – The New York Times

The Reputation Society – Hassan Masum

The Reputation Society (MIT Press, 2012) is a collection of essays discussing the benefits and risks of online reputation. It focuses on asking the right questions today, so that reputation is better used in society tomorrow. Expert contributors offer perspectives ranging from philanthropy and open access to science and law. The 18 chapters are divided into 6 thematic parts. (The Table of Contents, sample chapters, and reviews are on the Reputation Society MIT Press web page.)

Source: The Reputation Society – Hassan Masum

Facebook is rating the trustworthiness of its users on a scale from zero to 1 – The Washington Post

SAN FRANCISCO — Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1.

The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content — but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare for which it had to account.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
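Facebook has not disclosed how its score is computed, so as a purely illustrative sketch, combining several behavioral signals into one value between zero and 1 might look like a weighted average. The signal names and weights below are invented for the example.

```python
# Toy illustration of a 0-to-1 trust score as a weighted average of
# per-signal values. Not Facebook's method: signals and weights here
# are assumptions made for the sketch.

def trust_score(signals: dict[str, float],
                weights: dict[str, float]) -> float:
    """Weighted average of per-signal values, each in [0, 1]."""
    total_weight = sum(weights[name] for name in signals)
    if total_weight == 0:
        return 0.5  # no evidence: fall back to a neutral prior
    weighted = sum(signals[name] * weights[name] for name in signals)
    return weighted / total_weight


signals = {
    "report_accuracy": 0.9,  # how often the user's reports were upheld
    "account_age": 0.6,      # normalized age of the account
    "flagging_rate": 0.3,    # low value = flags content indiscriminately
}
weights = {"report_accuracy": 3.0, "account_age": 1.0, "flagging_rate": 2.0}

score = trust_score(signals, weights)  # 0.65
```

Because each signal stays in [0, 1] and the weights are non-negative, the combined score is guaranteed to stay in [0, 1] as well, matching the zero-to-1 scale the article describes.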

It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score and in what ways the scores are used.

The reputation assessments come as Silicon Valley, faced with Russian interference, fake news and ideological actors who abuse tech companies’ policies, is recalibrating its approach to risk — and is finding untested, algorithmically driven ways to understand who poses a threat. Twitter, for example, now factors in the behavior of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.

Source: Facebook is rating the trustworthiness of its users on a scale from zero to 1 – The Washington Post

Hacks and spying – Is WhatsApp safe for diplomats?

WhatsApp has become the standard tool for international negotiations. As early as 2016, the Guardian talked about the “rise and rise of diplomacy by WhatsApp”.

WhatsApp’s popularity with diplomats comes from the fact that it is encrypted and has a large base of users, says Corneliu Bjola. “Almost everyone has a WhatsApp account”, he notes.

Bjola teaches Diplomatic Studies at the University of Oxford. He also advises officials on digital diplomacy. Bjola says WhatsApp is used in multilateral settings such as the UN, as well as within foreign ministries.

Tricky security questions

Yet high-profile hacking cases and apparent security flaws have raised uncomfortable questions about the app.

WhatsApp’s popularity among diplomats could take a serious hit after the Cryptoleaks scandal. Investigative journalists revealed that German and US intelligence used faulty encryption to spy on allies across the globe.

US spying has caused trouble for WhatsApp’s parent company Facebook since revelations by whistle-blower Edward Snowden about the NSA in 2013.

The widespread use of surveillance casts doubts on whether free services by US firms can guarantee adequate protection for their users.

Europe has to ask itself – is WhatsApp safe enough for its diplomats?

Source: Hacks and spying – Is WhatsApp safe for diplomats?

Mediated trust

One of the new research threads of the Lab is technology mediated trust. It concerns the following simple questions:

  • how do we use technologies to produce trust and mitigate distrust in interpersonal and institutional contexts?
  • can we trust these trust technologies?

These questions are laid out in more detail in a paper currently under review at New Media and Society. The draft version is available here: Mediated Trust – A Theoretical Framework to Address the Trustworthiness of Technological Trust Mediators


There is also a talk version; the slides are available for download.




Mozilla Foundation – How to opt out of human review of your voice assistant recordings

If you have a voice assistant in your home or on your phone, have you ever been concerned that someone from the company could listen to your voice recordings?

Recent news coverage confirms that suspicion.

At the end of July, The Guardian reported that people at Apple were regularly listening to recordings of deeply personal events such as conversations with doctors, sexual encounters, and other moments. While the effort was designed as a quality control measure, users likely had no idea that some of their utterances were being recorded and reviewed by humans.

Since then, Apple has temporarily suspended its human review program. Google has been forced to pause its own review program in the EU, and Amazon is now giving users the ability to opt out.

Mozilla has put together a guide for you to change your privacy settings on voice assistants.

Source: Mozilla Foundation – How to opt out of human review of your voice assistant recordings

Tinder’s new safety features won’t prevent all types of abuse

The dating app Tinder has faced increasing scrutiny over abusive interactions on the service. In November 2019, an Auckland man was convicted of murdering British woman Grace Millane after they met on Tinder. Incidents such as these have brought attention to the potential for serious violence facilitated by dating apps.

Amid ongoing pressure to better protect its users, Tinder recently unveiled some new safety features.

The US version of the app added a panic button which alerts law enforcement to provide emergency assistance, in partnership with the safety app Noonlight. There is also a photo verification feature that will allow users to verify images they upload to their profiles, in an effort to prevent catfishing.

“Does This Bother You?” is another new feature, which automatically detects offensive messages in the app’s instant messaging service, and asks the user whether they’d like to report it. Finally, a Safety Center will give users a more visible space to see resources and tools that can keep them safe on the app.


Source: Tinder’s new safety features won’t prevent all types of abuse