If you have a voice assistant in your home or on your phone, have you ever been concerned that someone from the company could listen to your voice recordings?
Recent news coverage confirms that suspicion.
At the end of July, The Guardian reported that people at Apple were regularly listening to recordings of deeply personal moments, including conversations with doctors and sexual encounters. While the effort was designed as a quality-control measure, users likely had no idea that some of their utterances were being recorded and reviewed by humans.
Since then, Apple has temporarily suspended its human review program. Google has been forced to pause its own review program in the EU, and Amazon is now giving users the ability to opt out.
Mozilla has put together a guide to changing your privacy settings on voice assistants.
The dating app Tinder has faced increasing scrutiny over abusive interactions on the service. In November 2019, an Auckland man was convicted of murdering British woman Grace Millane after they met on Tinder. Incidents such as these have brought attention to the potential for serious violence facilitated by dating apps. Amid ongoing pressure to better protect its users, Tinder recently unveiled some new safety features.
The US version of the app added a panic button which alerts law enforcement to provide emergency assistance, in partnership with the safety app Noonlight. There is also a photo verification feature that will allow users to verify images they upload to their profiles, in an effort to prevent catfishing.
“Does This Bother You?” is another new feature, which automatically detects offensive messages in the app’s instant messaging service, and asks the user whether they’d like to report it. Finally, a Safety Center will give users a more visible space to see resources and tools that can keep them safe on the app.
Millions of Amazon customers are at risk of being duped by unscrupulous sellers gaming the Amazon’s Choice endorsement, new research from Which? reveals.
Our exclusive investigation shows the Amazon’s Choice badge recommends potentially poor quality products that appear to have been artificially boosted by incentivised and fake reviews.
We believe Amazon’s recommendation system is inherently flawed and easily gamed by unscrupulous sellers, despite evidence suggesting that many consumers trust the Amazon’s Choice badge as a mark of quality.
Video: why you shouldn’t trust Amazon’s Choice
We ask members of the public how they view Amazon’s Choice, and reveal the key details of our investigation.
Is Amazon’s Choice a ‘mark of quality’?
Shoppers use Amazon’s Choice to help them make decisions. New Which? research found four in ten (44%) Amazon customers – people who have been on the website in the last six months and have spotted an Amazon’s Choice logo – believe it means a product has been quality checked by Amazon, and a third (35%) believe it means it has been checked for safety.
And when people notice the logo, 45% of shoppers said they were more likely to purchase a product from Amazon with the badge than without.
Amazon’s Choice suspicious reviews
Which? looked at five popular product categories on Amazon.co.uk and found dozens of Amazon’s Choice-recommended products with the hallmarks of suspicious reviews.
Amazon’s Choice recommends ‘highly rated, well-priced products available to dispatch immediately’ and it is trusted by nearly half of all Amazon shoppers.
But our investigation suggests online sellers are secretly incentivising customers to leave five star reviews – potentially boosting their rankings on the website and making them more likely to be attributed the Amazon’s Choice badge.
Amazon’s Choice and unknown brands
We looked at the top 50 bestselling items on Amazon.co.uk in five popular product categories – dash cameras, action cameras, headphones/earphones, surveillance video equipment and smart watches.
While we found the Amazon’s Choice recommendation used for household names such as Apple, Panasonic and Sony, we also found it commonly used to recommend unknown brands – those our experts had never heard of outside of Amazon’s listings. This happened in nearly two thirds (63%) of cases.
Amazon’s ‘best-sellers’ list for five popular tech categories
The chart below shows the proportion of known and unknown brand products on the Amazon top 50 best-sellers page for five popular tech categories.
In nearly a quarter (23%) of cases, these unknown tech brands with Amazon’s Choice recommendations didn’t even appear to have a website. That is not only surprising for an electronics brand in 2020, but potentially leaves customers with limited or no product support if they have issues.
Products with evidence of incentivised reviews
All of the suspicious or fake review activity we found in the course of this investigation was on product listings from unknown brands.
Among these ‘bestselling’ Amazon’s Choice products we found:
- The AKASO EK7000 4K Sport action camera. It had 3,968 reviews – more than any other of the top 50 action cameras on Amazon – and an average rating of 4.5 out of 5 stars. But several people claimed they had been incentivised to write good reviews, with one sharing a photo of a leaflet offering free accessories in exchange for reviews underneath a big image of five stars.
One said: “The camera quality is awful on every setting, as is the audio. In the package came a leaflet… it explains that if you leave a five star review you can get free accessories. So the chances are that most of these five star reviews are not genuine. Don’t be fooled.”
Another said: “The reason for the high review ratings is because they offer you free accessories if you leave a favourable review! So how can you trust the reviews to be sincere and genuine?… The whole thing stinks! Disappointed in Amazon.”
- The Victure 1080P FHD WiFi IP Camera baby monitor. This was joint highest for number of reviews among the top 50 bestselling surveillance video equipment products on Amazon, with an average 4.4 star rating.
One reviewer criticised its wi-fi connection, playback and viewing capabilities. He said: “After seeing my review [someone] from Victure contacted me directly via email (ie outside of the Amazon messaging system) and asked me to change to five stars in exchange for a new free camera. I declined.”
Another added he was sent a card offering a new camera for a five-star review and wrote: “They know their product is faulty already so they lure customers to write good reviews and rate them five stars.”
- An ANCwear fitness tracker with an average 4.2 star rating. One reviewer actually posted a photo of the card used to offer the incentive, and wrote: “Don’t believe the five star reviews, the watch looks and feels very cheap… only reason it is getting good reviews is the £15 bribe.”
Other suspicious products
It wasn’t just Amazon’s bestselling items where we found Amazon’s Choice logos used to promote products that appeared to be suspicious. During the course of the investigation, each of the following items had an Amazon’s Choice logo:
A 2TB USB flash drive for just £18. Legitimate flash drives this size are rare, and cost over £1,000. Multiple users commented that the drive didn’t work or was a fake.
A pair of AMYEA wireless headphones with close to 2,000 reviews, the majority of which were about completely different products, including acne cream, a ceiling light shade, prescription goggles and even razor blades.
A security camera by Elite Security, which we reported to Amazon in November 2019 after it failed our security tests, was still listed as an Amazon’s Choice product.
Amazon removes Amazon’s Choice badges
Amazon told us it had removed the Amazon’s Choice logo from a number of products and taken action against some sellers following our investigation.
A spokesman said: “We know that customer trust is hard to earn and easy to lose, so we strive to protect customer trust in products Amazon’s Choice highlights. We don’t tolerate Amazon policy violations, such as review abuse, incentivised reviews, counterfeits or unsafe products. When deciding to badge a product as Amazon’s Choice, we proactively incorporate a number of factors that are designed to protect customers from those policy violations. When we identify a product that may not meet our high bar for products we highlight for customers, we remove the badge.”
He added that Amazon used advanced technology, coupled with regular human audits, to make sure Amazon’s Choice products were of a high standard.
ANCwear told Which?: “Only those who are satisfied with our products and are willing to leave feedback will [get] a coupon.” None of the other brands responded when we asked them for comment.
Amazon “must be more transparent”
Which? believes that Amazon must carefully scrutinise the use of its Amazon’s Choice branding to ensure that it is an effective tool for recommending products. It must also be more transparent about how such endorsements are attributed, so that consumers can make informed decisions on purchases that don’t involve assumptions or guesswork.
Which? is calling on the CMA to investigate the way in which fake reviews and endorsements awarded by online platforms are potentially misleading people. Sign our petition to stop fake reviews to demand action. We’re also interested in your stories. If you’ve seen evidence of fake or incentivised reviews, or if you have any other experience with endorsements on other websites then email firstname.lastname@example.org.
Do you trust Amazon’s Choice recommendations? Have your say now.
*Which? surveyed 2,042 GB adults between 21 and 22 January 2020; 896 had seen the Amazon’s Choice logo on a visit to the website in the previous six months. Fieldwork was carried out online by YouGov and data have been weighted to be representative of the GB population (aged 18+).
Facebook on Tuesday unveiled more details about the likely workings of a new independent board that’ll oversee content-moderation decisions, outlining a new appeals process users would go through to request an additional review of takedowns. Users of Facebook and its Instagram photo service can ask the board to review their case after appealing to the social network first. You’ll have 15 days to fill out a form on the board’s website after Facebook’s decision.
Requesting a review by the board doesn’t mean the body will automatically hear your case. You’ll have to explain why you think Facebook made the wrong call and why the board should weigh in. You’ll also have to spell out why you posted the content and explain why Facebook’s decision could affect other users.
Facebook will also get to refer cases to the board. Users will receive a notice explaining whether the board decided to review a given case. The company explained this process in proposed bylaws that still need to be approved by the board.
The creation of the independent content-moderation board could help clarify how Facebook decides what posts it leaves up or pulls down and lead to policy changes. The additional appeals process might also help Facebook fend off critics, some of whom have alleged that the company censors certain users and groups. Facebook also faces criticism for allowing politicians to spread false information.
“This board we’re creating is a means to hold ourselves accountable and provide for oversight on whether or not we’re making decisions that are principled according to the set of standards and values that we’ve set out,” Brent Harris, Facebook’s director of governance and global affairs, said during a conference call Tuesday.
The social network has rules about barring content, including hate speech, nudity and human trafficking. But users sometimes disagree with how those rules are applied. Facebook has reversed some of its decisions in the past, especially amid public scrutiny. In 2016, the company removed an iconic Vietnam War photo of a girl fleeing a napalm attack because it violated the social network’s rules on nudity. It reinstated the image amid an outcry, citing the image’s historical importance.
Facebook users typically receive a notification that includes an option to appeal when the company removes their content. If the appeal isn’t successful, users will now be able to ask the new board to review their case. Facebook will be required to reinstate removed content if the board sides with the user.
Users who submit an appeal will receive a reference identification number if their content is eligible for review by the board. Eligible content includes Facebook and Instagram posts, videos, photos and comments that the company took down. The process will eventually be expanded to groups, ads, events and other content, including information rated “false” by fact-checkers and content left up on the platform. Facebook didn’t specify when this would happen.
Facebook expects the board to make a decision and for the company to take action on the ruling in roughly 90 days.
Harris said he expects the board to initially review dozens of cases every year but noted that the decisions could impact Facebook’s 2.4 billion users, especially if the social network ends up changing its policies.
Users will also be able to choose if they want to include details that could identify them in the board’s final decision. On Tuesday, Ranking Digital Rights, a nonprofit that promotes freedom of expression and privacy, called on Facebook to provide more clarity, including how it’ll protect the privacy of users who don’t consent to releasing identifiable information. The board’s decision will be published on its website if approved for release.
Fay Johnson, a Facebook product manager who focuses on transparency and oversight, said the company is trying to make it clear to users that the board’s decisions will be public. “There really will be a value added to what the board speaks to, even if the specific information about the person who’s posting the content is not included in the draft decision,” she said.
Facebook also named Thomas Hughes, former executive director at Article 19, a nonprofit focused on freedom of expression and digital rights, to lead the board’s administrative staff. When Hughes led Article 19 in 2018, he called on Facebook to be more transparent about the content it removed and improve the appeals process for users.
“This is, as it goes without saying, an enormous undertaking and it will take us a few months before we are ready,” Hughes said.
The board is expected to be made up of 40 members and will likely start hearing cases this summer. Facebook announced its plans to create a content oversight board in 2018.
An Avast antivirus subsidiary sells users’ web browsing data: ‘Every search. Every click. Every buy. On every site.’ Its clients have included Home Depot, Google, Microsoft, Pepsi, and McKinsey.
When you buy a product on Amazon, there’s little guarantee that what you’re getting has been expertly vetted for safety. The Wall Street Journal reported this year that more than 4,000 banned, unsafe, and mislabeled products were on the company’s platform, ranging from faulty motorcycle helmets to magnetic toys labeled as choking hazards.
Those faulty products have resulted in serious, sometimes fatal, injuries, setting loose a tidal wave of liability claims. According to court records viewed by The Verge, Amazon has faced more than 60 federal lawsuits over product liability in the past decade. The suits are a grim catalog of disaster: some allege that hoverboards purchased through the company burned down properties. A vape pen purchased through the company exploded in a pocket, according to another suit, leaving a 17-year-old with severe burns.
The list goes on: an allegedly faulty ladder bought on Amazon is blamed for a death. Two days after Christmas in 2014, a fire started at a Wyoming home, blamed on holiday lights purchased through the company. Firefighters found a man inside, facedown and unconscious, according to court filings. He died that night. The results of the suits have been mixed: Amazon has settled some cases, and successfully defended itself in others, depending on the circumstances. (The company declined to comment for this article.)
Throughout the cases, Amazon has taken advantage of its unusual legal status as half-platform, half-store. If Home Depot sells a defective bandsaw, the store can be sued alongside the company that made the product. That liability means conventional retailers have to be careful about the products they stock, making sure every item on store shelves has passed at least the most basic product safety requirements. States have passed different versions of product liability laws, but they all put the burden of fault on more than just the original manufacturer.
But Amazon is more complex: it acts as a direct seller of products, while also providing a platform, called Marketplace, for third parties to sell their products. Tightly integrated into Amazon’s own sales, Marketplace products are often cheaper for consumers, less controlled, and sometimes less reliable than other products — and because Amazon is usually seen as a platform for those sales rather than a seller, the company has far less liability for anything that goes wrong. But because the Marketplace is so intertwined with Amazon’s main “retail” store, it’s easy for customers to miss the difference.
Kaylen Ward’s Twitter fundraiser for the Australian bushfire relief has ended. The Los Angeles-based model said she raised $1 million (by comparison, Jeff Bezos donated $690,000). At the start of Ms. Ward’s successful donation drive, she had three Instagram accounts – none of which were part of the campaign.
Despite that, Instagram kicked her off all three accounts, saying her behavior on Twitter violated Instagram’s sexually suggestive content guidelines. On Twitter, Ms. Ward — as The Naked Philanthropist — offered a privately-sent nude photo to those who provided verifiable proof of donation to organizations including Australian Red Cross and The Koala Hospital. Her fundraiser complied with Twitter’s Terms of Service.
If the thought of companies stalking you online and denying you services because they think you’re a sinner gives you the Orwell Anti-Sex League chills, you should know that Airbnb just asked Instagram to hold its beer.
The same day Ms. Ward launched her fundraising campaign, reports emerged detailing Airbnb’s new “trait analyzer” algorithms, which compile data dossiers on users, decide whether you’ve been bad or good, give you a score, and then “flag and investigate suspicious activity before it happens.”
The Evening Standard reported on Airbnb’s patent for AI that crawls and scrapes everything it can find on you, “including social media for traits such as ‘conscientiousness and openness’ against the usual credit and identity checks and what it describes as ‘secure third-party databases’.”
The report added that traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy.”
Commonwealth legislation should be published not only in words but also in machine-readable code, allowing it to be read by computers as well as lawyers. CSIRO suggests the move will boost the adoption of new regulatory technology across the economy, improving compliance while reducing costs.
CSIRO detailed its vision for “rules as code” in a submission to the Senate select committee on financial and regulatory technology, calling for the government to think more about ‘legal informatics’, or ‘computational law’, to allow computers to help automate compliance. This would “reduce the cost of red tape and improve the quality of risk management in society,” the science agency said.
“The goal is that computer-assisted reasoning using these logics should give the same answers as judges and lawyers doing legal reasoning about the black-letter law,” CSIRO said. “When legal texts can be represented in this way, it enables the potential to build digital tools to help people to interact with the law.”
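To make the “rules as code” idea concrete, here is a purely illustrative sketch of how a statutory eligibility rule might be expressed as executable logic. The rule, thresholds, and names below are invented for illustration and are not taken from any actual legislation or from CSIRO’s submission; the point is only that a program and a lawyer reading the same rule should reach the same answer.

```python
from dataclasses import dataclass

# Hypothetical rule, invented for illustration: a person is eligible for a
# concession if they are at least 65 years old OR hold a qualifying card,
# AND their annual income is below a threshold.

INCOME_THRESHOLD = 50_000  # hypothetical figure

@dataclass
class Person:
    age: int
    holds_qualifying_card: bool
    annual_income: int

def eligible_for_concession(p: Person) -> bool:
    """Machine-readable restatement of the (hypothetical) black-letter rule."""
    meets_status_test = p.age >= 65 or p.holds_qualifying_card
    meets_income_test = p.annual_income < INCOME_THRESHOLD
    return meets_status_test and meets_income_test

print(eligible_for_concession(Person(age=70, holds_qualifying_card=False,
                                     annual_income=30_000)))  # True
print(eligible_for_concession(Person(age=40, holds_qualifying_card=False,
                                     annual_income=30_000)))  # False
```

Publishing something like this alongside the legislative text is what would let regtech tools automate compliance checks against the law itself rather than against a vendor’s private interpretation of it.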
The banking industry – which has faced soaring compliance costs in the wake of the Hayne royal commission – has been wary about adopting new technologies in compliance. This has been due to the complexity of regulation, a reluctance by regulators to endorse a specific technological approach, and heavy sanctions for failures. This was evidenced by AUSTRAC’s legal actions for anti-money laundering failures at Commonwealth Bank and Westpac, which both related to failures in technology systems.
The big banks want Treasury to encourage the financial sector regulators ASIC, APRA and the Reserve Bank, “to make regtech a viable proposition in the financial services sector”.
Many start-ups, along with more established technology vendors, are developing new systems to help banks meet legal duties, including establishing the identity and background of customers, ensuring legal compliance, verification of income and expenses, and data privacy. Juniper Research expects global spending on regtech to rise from $US25 billion ($37 billion) in 2019 to $US127 billion by 2024.
‘Rules as code’
CSIRO, which operates a digital innovation arm known as Data61, has detailed to the committee how a “rules as code” approach could lift compliance with various laws.
It is working with PwC on a joint venture called PaidRight to check employees’ entitlements under enterprise bargaining agreements against what they have actually been paid.
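The kind of check such a tool performs can be sketched in a few lines: compute what an employee was entitled to under an agreement and compare it with what payroll actually paid. The rates, hours threshold, and function names below are hypothetical illustrations; the actual PaidRight system is of course far more involved.

```python
# Hypothetical pay rule, invented for illustration: a base hourly rate, with
# an overtime multiplier beyond 38 ordinary hours per week.

BASE_RATE = 25.0           # dollars per hour (hypothetical)
OVERTIME_MULTIPLIER = 1.5  # hypothetical loading for overtime hours
ORDINARY_HOURS = 38

def entitled_pay(hours_worked: float) -> float:
    """Weekly pay owed under the (hypothetical) agreement."""
    ordinary = min(hours_worked, ORDINARY_HOURS)
    overtime = max(hours_worked - ORDINARY_HOURS, 0)
    return ordinary * BASE_RATE + overtime * BASE_RATE * OVERTIME_MULTIPLIER

def underpayment(hours_worked: float, amount_paid: float) -> float:
    """Positive result means the employee was paid less than entitled."""
    return round(entitled_pay(hours_worked) - amount_paid, 2)

# 42 hours worked: 38 * 25 + 4 * 37.5 = 1100 entitled; paid 1000, so 100 short.
print(underpayment(42, 1000.0))  # 100.0
```

Running such a comparison across every employee and pay period is what turns a machine-readable agreement into an automated audit.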
The banking industry and CSIRO are working on a project to develop a digital approach to organising climate change disclosure, which CSIRO said could be “a first step towards a nationally coordinated framework for delivery of climate information”.
The agency is also working with the building and construction industry to automatically check Computer Aided Design (CAD) models of buildings against the many building and construction regulations from the federal and state governments.
Publishing machine-interpretable rules alongside the text of legislation would “provide critical support for the regtech industry and potentially significant productivity benefits for regulated industries in Australia,” CSIRO said.
The Australian Banking Association called on the committee, which is being chaired by Liberal Senator Andrew Bragg, to recommend that Treasury “be explicitly tasked with responsibility for a growth strategy for regtech”.
Design box thinking
The RegTech Association, which represents 110 start-ups and corporates, suggested the committee call for the creation of a COAG-style forum to introduce government departments to regtech, and is encouraging government to become an “influencer, buyer, beneficiary and investor” in the space.
In its submission, the association suggests a percentage of regulatory fines paid by banks could be invested in a new ‘patient capital’ investment fund to invest in the sector, modelled on the Australian Medical Research Future Fund. It also reckons a safe harbour, or relief program, could be created to provide reporting entities attempting to deploy regtech with more confidence to adopt changes, via amendments to ASIC’s regulatory guidance.
The association also wants government to create “design box” or “sandbox” programs to accelerate testing of new technologies. It pointed to the APIX Platform, part of the ASEAN Financial Innovation Network and backed by the Monetary Authority of Singapore, which has created a marketplace for financial institutions to exchange ideas with fintechs on better ways of doing things.
“Australia could easily replicate this idea of a digital marketplace or partner to introduce a similar platform,” the association said.
“It could allow buyers and sellers to come together to experiment more easily, allow greater visibility over regtech solutions, help regtechs understand the current problem statements of their potential clients, and allow a ‘design box’ where negative assurance could be provided by regulators as observers. Over time the digital marketplace could also be a portal for talent and skill recruitment.”
Separately, CSIRO responded to an accusation in the submission by FinTech Australia to the inquiry which criticised Data61 for “competing directly with private enterprise for government and non-government work”. In a statement, CSIRO said it “does not seek to compete with the private sector or start-ups and where possible aims to partner with Australian organisations, to support their growth.
“Like many of CSIRO’s business units, projects for Data61, the digital innovation arm of CSIRO, are typically identified as a result of discussions with, or approaches from government, industry or academic partners, where an opportunity has been identified for our research to be applied to solve problems and create benefits for Australia.”
But unregulated markets for goods have been shown not to work; and it turns out that unregulated markets for ideas don’t either.
Global digital platforms are conquering the world and rely critically on digital infrastructures to function, yet little research has explored the fundamental interrelationship between the two. This working paper argues that understanding centralization and decentralization in digital networks as asymmetry and symmetry in mutual interdependencies between the constitutive elements of a digital network can help us understand the platform-infrastructure relationship more fundamentally (and vice versa). To this end, the paper proposes, as a starting point, an in-depth analytical and literature study of blockchain networks as a particularly revealing type of digital platform/infrastructure duality. The paper proposes an analytical model for characterizing de/centralization in digital networks and maps this onto blockchain networks. On this basis, the paper explores the de/centralization of blockchain, arguing that the extant blockchain literature has largely failed to provide a comprehensive understanding of de/centralization by not considering the complex second-order interdependencies between the different constitutive dimensions of a blockchain: the symbolic, technological and political dimensions. The paper then analyzes the meaning of de/centralization in blockchain networks by studying the interdependencies between its constitutive elements of coin, network technology, and social community.