Kaylen Ward’s Twitter fundraiser for Australian bushfire relief has ended. The Los Angeles-based model said she raised $1 million (by comparison, Jeff Bezos donated $690,000). At the start of Ms. Ward’s successful donation drive, she had three Instagram accounts — none of which were part of the campaign.
Despite that, Instagram kicked her off all three accounts, saying her behavior on Twitter violated Instagram’s sexually suggestive content guidelines. On Twitter, Ms. Ward — as The Naked Philanthropist — offered a privately sent nude photo to those who provided verifiable proof of donation to organizations including the Australian Red Cross and The Koala Hospital. Her fundraiser complied with Twitter’s Terms of Service.
If the thought of companies stalking you online and denying you services because they think you’re a sinner gives you the Orwell Anti-Sex League chills, you should know that Airbnb just asked Instagram to hold its beer.
The same day Ms. Ward launched her fundraising campaign, reports emerged detailing Airbnb’s new “trait analyzer” algorithms that compile data dossiers on users, decide whether you’ve been bad or good, give you a score, and then “flag and investigate suspicious activity before it happens.”
The Evening Standard reported on Airbnb’s patent for AI that crawls and scrapes everything it can find on you, “including social media for traits such as ‘conscientiousness and openness’ against the usual credit and identity checks and what it describes as ‘secure third-party databases’.”
They added, “Traits such as ‘neuroticism and involvement in crimes’ and ‘narcissism, Machiavellianism, or psychopathy’ are ‘perceived as untrustworthy.’” Further:
To protect its hosts, Airbnb is now using an AI-powered tool to scan the internet for clues that a guest might not be a reliable customer. According to patent documents reviewed by the Evening Standard, the tool takes into account everything from a user’s criminal record to their social media posts to rate their likelihood of exhibiting “untrustworthy” traits — including narcissism, Machiavellianism, and even psychopathy. The background check tool is the work of Trooly, a startup Airbnb acquired in 2017. When the Evening Standard asked Airbnb to comment on the extent to which it uses Trooly’s tool, it declined. However, Airbnb’s website does note the company’s use of AI to rate potential guests: “Every Airbnb reservation is scored for risk before it’s confirmed. We use predictive analytics and machine learning to instantly evaluate hundreds of signals that help us flag and investigate suspicious activity before it happens.”
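Airbnb’s own description — “predictive analytics and machine learning to instantly evaluate hundreds of signals” — points to a weighted risk-scoring model of the kind common in trust-and-safety pipelines. As a purely illustrative sketch (Airbnb’s actual model, signals and weights are proprietary; every name and number below is invented), such scoring can be as simple as:

```python
import math

# Invented example signals and weights -- the real model is not public.
RISK_WEIGHTS = {
    "account_age_days": -0.002,   # long-standing accounts score slightly lower
    "failed_id_checks": 0.9,
    "prior_complaints": 0.7,
    "last_minute_booking": 0.3,
}

def risk_score(signals: dict) -> float:
    """Squash a weighted sum of signals into a 0-1 risk score."""
    z = sum(RISK_WEIGHTS.get(name, 0.0) * value for name, value in signals.items())
    return 1 / (1 + math.exp(-z))

def flag_for_review(signals: dict, threshold: float = 0.8) -> bool:
    """Flag a reservation for manual review when the score crosses a threshold."""
    return risk_score(signals) >= threshold
```

The civil-liberties worry in the surrounding text maps directly onto this sketch: the weights and the threshold are chosen by the company and invisible to the person being scored.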
Uber faced a blow on Monday when London regulators refused to renew the ride-hailing company’s operating permit because of safety concerns. The biggest issue lawmakers cited was drivers using false identities as they ferried unsuspecting passengers. At least 14,000 trips were made by unauthorized drivers, according to city regulator Transport for London. The way it worked is this: a number of drivers would share one account, and whenever one of them went out to drive, they’d upload their own photo to fool passengers. The unauthorized drivers were able to pose as vetted, licensed and insured, when often they weren’t.
2019 CIGI-Ipsos Global Survey Highlights

1. Social media companies were second only to cybercriminals in fueling online distrust; 75% say social media companies are responsible for their online distrust.

In the 2019 survey, 75% of those polled cited Facebook, Twitter and other social media platforms as contributing to their lack of trust in the internet, making social media companies the leading source of user distrust after cybercriminals. People from Canada and Great Britain, at 89%, were the most likely to point to social media as a source of their distrust, followed by Nigeria (88%), the United States (87%) and Australia (83%). People from Japan (49%), Tunisia (60%), Hong Kong (63%) and Korea (64%) were the least likely to do so. Almost nine in ten (88%) North Americans who distrust the internet cited social media as responsible, the highest proportion of all regions surveyed. While cybercriminals, cited by 81%, remained the leading source of internet distrust, a majority in all regions (62% globally) indicated that a lack of internet security was also a significant factor, up sharply from 48% in 2018.

2. More than half of those concerned about their online privacy say they’re more concerned than they were a year ago; 53% are more concerned about their online privacy than a year ago.

Eight out of ten (78%) people surveyed were concerned about their online privacy, with over half (53%) more concerned than they were a year ago, marking the fifth year in a row that a majority of those surveyed said they feel more concerned about their online privacy than the previous year. Fewer than half (48%) believe their government does enough to safeguard their online data and personal information, with the lowest confidence levels in North America (38%) and the G-8 countries (39%). Citizens around the world increasingly view their own governments as a threat to their privacy online.
In fact, more people attributed their online privacy concerns to domestic governments (66%), a majority in nearly every region surveyed, than to foreign governments (61%). While 73% said they wanted their online data and personal information stored in their own country, majorities in Hong Kong (62%), Indonesia (58%), Egypt (58%), India (57%), Brazil (54%) and Mexico (51%) said they wanted it stored outside their country. In contrast, only 23% of North Americans, 35% of Europeans and 32% of those in G-8 countries shared this sentiment.

3. A majority admit to falling for fake news at least once, cite Facebook as the leading source, and want both governments and social media companies to take action; 86% have fallen for fake news at least once.

86% said they had fallen for fake news at least once, with 44% saying they sometimes or frequently did; only 14% said they had “never” been duped by fake news. Facebook was the most commonly cited source of fake news: 77% of Facebook users said they had personally seen fake news there, followed by 62% of Twitter users and 74% of social media users in general. 10% of Twitter users said they had closed their Twitter account in the past year as a direct result of fake news, and 9% of Facebook users reported doing the same. One-third (35%) pointed to the United States as the country most responsible for the disruptive effect of fake news in their country, trailed significantly by Russia (12%) and China (9%). Notably, internet users in Canada (59%), Turkey (59%) and the United States itself (57%) were most likely to say that the United States is most responsible for the disruptive effect of fake news in their own country, while users in Great Britain (40%) and Poland (35%) were most likely to point to Russia, and users in Hong Kong (39%), Japan (38%) and India (29%) were most likely to blame China.
A majority of internet users around the globe support all efforts that governments and internet companies could take to combat fake news, from social media and video-sharing platforms deleting fake news posts and videos (85%) and accounts (84%) to the adoption of automated approaches to content removal (79%) and government censorship of online content (61%).

4. Distrust in the internet is causing people to change the way they behave online; 49% say their distrust has led them to disclose less personal information online.

Nearly half (49%) of those surveyed said their distrust had caused them to disclose less personal information online, while 43% reported taking greater care to secure their devices and 39% said they were using the internet more selectively, among other precautions. Conversely, only a small percentage of people reported making use of more sophisticated tools, such as more encryption (19%) or technical tools like Tor (The Onion Router) or virtual private networks (VPNs), to protect themselves online.
When Prince Harry posted a photograph of himself and his future wife Meghan Markle in Botswana, placing a satellite collar on an elephant to track it and protect it from poachers, the royal was demonstrating new ways of combating the illegal wildlife trade. The couple’s post sought to highlight that more than 100 African elephants a day are killed for their ivory.

Now the war against poaching has another potential weapon: artificial intelligence. AI is capable of analysing different kinds of data sets and spotting significant patterns. The results can be used for the wider public good, such as improving planning in healthcare and public transport, or fighting wildlife poachers. “Audio data can be used to train algorithms to distinguish gunshots [of] those poaching wild animals [from] the gunshots [of] hunters,” says Chris Martin, a partner at law firm Pinsent Masons. Using big data, real-time alerts could be pinged to rangers to tell them which areas to focus on.

Data trusts, which are separate legal entities designed to help organisations extract value from anonymised data without falling foul of privacy regulations, are being mooted as a way to allay concerns about how sensitive data is held by third parties. A pilot study on whether data trusts should be set up to share information to tackle the illegal wildlife trade was one of three initiatives by the Open Data Institute earlier this year (the ODI is a UK non-profit body that works with companies and governments “to build an open, trustworthy data ecosystem”). The study looked at whether data trusts could hold photographs from camera traps and acoustic information from a range of sources, which could be used by algorithms to create real-time alerts on poachers in protected areas. There are, however, legal questions about how to share anonymised data from governments and companies in a safe, ethical way against a backdrop of public mistrust.
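Mr Martin’s gunshot example is, at bottom, an audio-classification task: extract acoustic features from a recording and assign it to the most similar labelled class. A minimal nearest-centroid sketch (the features, feature values and class labels below are invented for illustration; real systems would extract spectrograms and train neural networks on large labelled datasets):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical labelled feature vectors:
# (peak amplitude, duration in ms, dominant frequency in kHz)
TRAINING = {
    "poacher_rifle": [(0.9, 40, 1.2), (0.85, 35, 1.1)],
    "licensed_hunter_shotgun": [(0.8, 60, 0.6), (0.75, 65, 0.55)],
}

def centroid(vectors):
    """Average each feature across a class's training examples."""
    return tuple(sum(col) / len(col) for col in zip(*vectors))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    """Assign a recording to the class with the nearest centroid."""
    return min(CENTROIDS, key=lambda label: euclidean(features, CENTROIDS[label]))
```

In the scenario the article describes, a classification of this kind would trigger the real-time alert pinged to rangers.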
In the biggest scandal to date, consultancy Cambridge Analytica illicitly harvested personal data from Facebook to influence elections. In July, the US Federal Trade Commission approved a $5bn fine for the social media platform for privacy violations. In Los Angeles, residents have expressed concerns about the use of personal data collected from electric scooters, which is intended to help urban planning.

Companies and governments tread a fine line between extracting information from data and ensuring they do not break laws such as the EU’s General Data Protection Regulation (GDPR), which forces any company holding personal data of an EU citizen to seek consent and delete the data on request. Individuals should not be identifiable from the data sets. These legal problems on privacy and governance were what law firm Pinsent Masons with BPE Solicitors had to contend with when advising the ODI on data trusts.

The anti-poaching project also looked at whether a data trust could improve the sharing of information and invoice data from researchers and governments, by monitoring documents given to border staff about species being transported across borders, documents that can be falsified by smugglers. The data could be used to train algorithms to help border staff identify illegally traded animals. Mr Martin says setting up such a data trust could enable border officials to take photographs of a live animal and use software to check whether it is a species on which there are export restrictions.

One advantage of a data trust is that it enables individuals to become trustees and have a say in how their anonymised data is used. It would allow citizens to be represented if the data trust held traffic information collected about their locality, for example. Data trusts might also encourage companies to contribute data so they can work on projects where they have a common goal.
“The big supermarkets could decide to set up a data trust to share data on, for example, tackling food waste or climate change,” Mr Martin says.

Chris Reed, professor of electronic commerce at Queen Mary University of London, says data trusts are useful when multiple organisations put in data. “The sharing of data might have been subject to agreements between parties, but when you might have 100 companies putting in data you cannot have agreements covering them all. Having a data trust is a fair and safe way of doing this,” he says.

Only a handful of data trusts exist. Credit card company Mastercard and IBM have formed an independent Dublin-based data trust called Truata. Connor Manning, a partner at law firm Arthur Cox, handled the corporate and trust structure documentation. He says that part of the legal complexity was designing the structure so that Mastercard was a beneficiary of the trust but the structure was not a standard company. “It is a corporate structure with a trust structure on top,” he explains. A data trust may not be the answer to every situation.
Facebook is at pains to address such anxieties, but its initial outline of how the new currency will be used in Messenger and WhatsApp via Calibra is not exactly reassuring. At the very least, Facebook will know the people and companies with whom its users have financially interacted, but that is likely to only be the start. “Calibra will not share account information or financial data with Facebook, Inc or any third party without customer consent,” it says (my italics). Obviously, the company has past form here: the kind of forced consent that means you either allow Facebook to gobble up your data or don’t get access to many of its services. Whatever the guarantees, the most basic point is obvious enough: why should a company with such an appalling record on personal data be trusted to so massively extend its reach?
Abstract

Emerging as a comprehensive and aggressive governance scheme in China, the “Social Credit System” (SCS) seeks to promote the norms of “trust” in Chinese society by rewarding behavior that is considered “trust-keeping” and punishing behavior considered “trust-breaking.” This Article closely examines the evolving SCS regime and corrects myths and misunderstandings popularized in the international media. We identify four key mechanisms of the SCS — information gathering, information sharing, labeling, and joint sanctions — and highlight their unique characteristics as well as normative implications. In our view, the new governance mode underlying the SCS, what we call the “rule of trust,” relies on the fuzzy notion of “trust” and wide-ranging arbitrary and disproportionate punishments. It derogates from the notion of “governing the country in accordance with the law” enshrined in China’s Constitution.

This Article contributes to legal scholarship by offering a distinctive critique of the perils of China’s SCS in terms of the party-state’s tightening social control and human rights violations. Further, we critically assess how the Chinese government uses information and communication technologies to facilitate data-gathering and data-sharing in the SCS with few meaningful legal constraints. The unbounded and uncertain notion of “trust” and the unrestrained employment of technology are a dangerous combination in the context of governance. We conclude with a caution that, with considerable sophistication, the Chinese government is preparing a much more sweeping version of the SCS reinforced by artificial intelligence tools such as facial recognition and predictive policing. Those developments will further empower the government to enhance surveillance and perpetuate authoritarianism.

Keywords: Social Credit, information and communications technologies, governance, social control, human rights
These days, it’s not a shared drill that’s redefining trust and supplanting institutional intermediaries; it’s the blockchain. Botsman now says that the blockchain is the next step in shifting trust from institutions to strangers. “Even though most people barely know what the blockchain is, a decade or so from now, it will be like the internet,” she writes. “We’ll wonder how society ever functioned without it.”
The ambitious promises all sound very familiar.
The Trust & Technology Initiative brings together and drives forward interdisciplinary research from Cambridge and beyond to explore the dynamics of trust and distrust in relation to internet technologies, society and power; to better inform trustworthy design and governance of next generation tech at the research and development stage; and to promote informed, critical, and engaging voices supporting individuals, communities and institutions in light of technology’s increasing pervasiveness in societies.
Source: Trust & Technology Initiative
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.

When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?

Blockchain enthusiasts point to more traditional forms of trust — bank processing fees, for example — as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.

Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility — that’s when the people in charge of a blockchain step outside the system to change it — people will need to be in charge.
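The point about trusting the cryptography absolutely can be made concrete with a minimal hash chain: each block commits to its predecessor’s hash, so tampering anywhere invalidates everything after it, and “valid” means exactly what the code says, nothing more. A toy sketch (real blockchains add digital signatures, consensus and proof-of-work on top of this idea):

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Hash a block's payload together with the previous block's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Chain blocks so each one commits to all history before it."""
    chain, prev = [], "0" * 64  # conventional all-zero genesis hash
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain) -> bool:
    """Recompute every hash; any tampered payload breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True
```

Note what the code cannot do: if a bug or a stolen key produces a transaction you never intended, the chain still validates. That is the “no recourse” problem in miniature.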