2019 CIGI-Ipsos Global Survey Highlights

1. Social media companies were second only to cybercriminals when it came to fueling online distrust.

75% say social media companies are responsible for their online distrust

In the 2019 survey, social media companies emerged as the second-leading source of user distrust in the internet, surpassed only by cybercriminals, with 75% of those surveyed citing Facebook, Twitter and other social media platforms as contributing to their lack of trust. People from Canada and Great Britain, at 89%, were the most likely to point to social media as a source of their distrust, followed by Nigeria (88%), the United States (87%) and Australia (83%). People from Japan (49%), Tunisia (60%), Hong Kong (63%) and Korea (64%) were the least likely to do so. Almost nine in ten (88%) North Americans who distrust the internet cited social media as responsible for their distrust, the highest proportion of all regions surveyed. While cybercriminals, cited by 81%, remained the leading source of internet distrust, a majority in all regions (62% globally) indicated that a lack of internet security was also a significant factor — up significantly from 48% in 2018.

2. More than half of those concerned about their online privacy say they're more concerned than they were a year ago.

53% are more concerned about their online privacy than they were a year ago

Eight out of 10 (78%) people surveyed were concerned about their online privacy, with over half (53%) more concerned than they were a year ago, marking the fifth year in a row that a majority of those surveyed said they felt more concerned about their online privacy than in the previous year. Fewer than half (48%) believe their government does enough to safeguard their online data and personal information, with the lowest confidence levels in North America (38%) and the G-8 countries (39%). Citizens around the world increasingly view their own governments as a threat to their privacy online.
In fact, more people attributed their online privacy concerns to domestic governments (66%) — a majority in nearly every region surveyed — than to foreign governments (61%). While 73% said they wanted their online data and personal information to be stored in their own country, majorities in Hong Kong (62%), Indonesia (58%), Egypt (58%), India (57%), Brazil (54%) and Mexico (51%) said they wanted their online data and personal information stored outside of their country. In contrast, only 23% of North Americans, 35% of Europeans and 32% of those in G-8 countries shared this sentiment.

3. A majority admit to falling for fake news at least once — citing Facebook as the leading source — and want both governments and social media companies to take action.

86% have fallen for fake news at least once

86% said they had fallen for fake news at least once, with 44% saying they sometimes or frequently did. Only 14% said they had "never" been duped by fake news. Facebook was the most commonly cited source of fake news: 77% of Facebook users said they had personally seen fake news there, compared with 62% of Twitter users and 74% of social media users in general. 10% of Twitter users said they had closed their Twitter account in the past year as a direct result of fake news, while 9% of Facebook users reported doing the same. One-third (35%) pointed to the United States as the country most responsible for the disruptive effect of fake news in their country, trailed significantly by Russia (12%) and China (9%). Notably, internet users in Canada (59%), Turkey (59%) and the United States itself (57%) were the most likely to say that the United States is most responsible for the disruptive effect of fake news in their own country, while users in Great Britain (40%) and Poland (35%) were most likely to point to Russia, and users in Hong Kong (39%), Japan (38%) and India (29%) were most likely to blame China.
A majority of internet users around the globe support all efforts that governments and internet companies could take to combat fake news, from social media and video-sharing platforms deleting fake news posts and videos (85%) and accounts (84%) to the adoption of automated approaches to content removal (79%) and government censorship of online content (61%).

4. Distrust in the internet is causing people to change the way they behave online.

49% say their distrust has led them to disclose less personal information online

Nearly half (49%) of those surveyed said their distrust had caused them to disclose less personal information online, while 43% reported taking greater care to secure their devices and 39% said they were using the internet more selectively, among other precautions. Conversely, only a small percentage of people reported making use of more sophisticated protections, such as encryption (19%) or technical tools like Tor (The Onion Router) or virtual private networks (VPNs), to protect themselves online.
When Prince Harry posted a photograph of himself and his future wife Meghan Markle in Botswana, placing a satellite collar on an elephant to track it and protect it from poachers, the royal was demonstrating new ways of combating the illegal wildlife trade. The couple's post sought to highlight that more than 100 African elephants a day are killed for their ivory.

Now, the war against poaching has another potential weapon: artificial intelligence. AI is capable of analysing different kinds of data sets and spotting significant patterns. The results can be used for the wider public good, such as improving planning in healthcare and public transport — or fighting wildlife poachers.

"Audio data can be used to train algorithms to distinguish gunshots [of] those poaching wild animals [from] the gunshots [of] hunters," says Chris Martin, a partner at law firm Pinsent Masons. Using big data, real-time alerts could be pinged to rangers to tell them which areas to focus on.

Data trusts — which are separate legal entities designed to help organisations extract value from anonymised data without falling foul of privacy regulations — are being mooted as a way to allay concerns about how sensitive data is held by third parties. A pilot study on whether data trusts should be set up to share information to tackle the illegal wildlife trade was one of three initiatives by the Open Data Institute earlier this year (the ODI is a UK non-profit body that works with companies and governments "to build an open, trustworthy data ecosystem"). The study looked at whether data trusts could hold photographs from camera traps and acoustic information from a range of sources, which could be used by algorithms to create real-time alerts on poachers in protected areas.

There are, however, legal questions about how to share anonymised data from governments and companies in a safe, ethical way against a backdrop of public mistrust.
In the biggest scandal to date, consultancy Cambridge Analytica illicitly harvested personal data from Facebook to influence elections. In July, the US Federal Trade Commission approved a $5bn fine for the social media platform for privacy violations. In Los Angeles, residents have expressed concerns about the use of personal data collected from electric scooters, which is intended to help urban planning.

Companies and governments tread a fine line between extracting information from data and ensuring they do not break laws such as the EU's General Data Protection Regulation (GDPR), which forces any company holding the personal data of an EU citizen to seek consent and to delete the data on request. Individuals should not be identifiable from the data sets. These legal problems on privacy and governance were what law firm Pinsent Masons, with BPE Solicitors, had to contend with when advising the ODI on data trusts.

The project to combat poaching looked at whether a data trust could improve the sharing of information and invoice data from researchers and governments by monitoring the documents given to border staff about species being transported across borders, documents that can be falsified by smugglers. The data could be used to train algorithms to help border staff identify illegally traded animals. Mr Martin says setting up such a data trust could enable border officials to take photographs of a live animal and use software to check whether it is a species on which there are export restrictions.

One advantage of a data trust is that it enables individuals to become trustees and have a say in how their anonymised data is used. It would allow citizens to be represented if the data trust held traffic information collected about their locality, for example. Data trusts might also encourage companies to put in data, enabling them to work on projects where they have a common goal.
"The big supermarkets could decide to set up a data trust to share data on, for example, tackling food waste or climate change," Mr Martin says.

Chris Reed, professor of electronic commerce at Queen Mary University of London, says data trusts are useful when multiple organisations put in data. "The sharing of data might have been subject to agreements between parties, but when you might have 100 companies putting in data you cannot have agreements covering them all. Having a data trust is a fair and safe way of doing this," he says.

Only a handful of data trusts exist. Credit card company Mastercard and IBM have formed an independent Dublin-based data trust called Truata. Connor Manning, a partner at law firm Arthur Cox, handled the corporate and trust structure documentation. He says that part of the legal complexity was designing the structure so that Mastercard was a beneficiary of the trust but the structure was not a standard company. "It is a corporate structure with a trust structure on top," he explains.

A data trust may not be the answer to every situation.
Facebook is at pains to address such anxieties, but its initial outline of how the new currency will be used in Messenger and WhatsApp via Calibra is not exactly reassuring. At the very least, Facebook will know the people and companies with whom its users have financially interacted, but that is likely to only be the start. “Calibra will not share account information or financial data with Facebook, Inc or any third party without customer consent,” it says (my italics). Obviously, the company has past form here: the kind of forced consent that means you either allow Facebook to gobble up your data or don’t get access to many of its services. Whatever the guarantees, the most basic point is obvious enough: why should a company with such an appalling record on personal data be trusted to so massively extend its reach?
Abstract

Emerging as a comprehensive and aggressive governance scheme in China, the "Social Credit System" (SCS) seeks to promote the norms of "trust" in Chinese society by rewarding behavior that is considered "trust-keeping" and punishing behavior considered "trust-breaking." This Article closely examines the evolving SCS regime and corrects myths and misunderstandings popularized in the international media. We identify four key mechanisms of the SCS, i.e., information gathering, information sharing, labeling, and joint sanctions, and highlight their unique characteristics as well as their normative implications. In our view, the new governance mode underlying the SCS — what we call the "rule of trust" — relies on the fuzzy notion of "trust" and on wide-ranging, arbitrary, and disproportionate punishments. It derogates from the notion of "governing the country in accordance with the law" enshrined in China's Constitution.

This Article contributes to legal scholarship by offering a distinctive critique of the perils of China's SCS in terms of the party-state's tightening social control and human rights violations. Further, we critically assess how the Chinese government uses information and communication technologies to facilitate data-gathering and data-sharing in the SCS with few meaningful legal constraints. The unbounded and uncertain notion of "trust" and the unrestrained employment of technology are a dangerous combination in the context of governance. We conclude with a caution that, with considerable sophistication, the Chinese government is preparing a much more sweeping version of the SCS, reinforced by artificial intelligence tools such as facial recognition and predictive policing. Those developments will further empower the government to enhance surveillance and perpetuate authoritarianism.

Keywords: Social Credit, information and communications technologies, governance, social control, human rights
These days, it’s not a shared drill that’s redefining trust and supplanting institutional intermediaries; it’s the blockchain. Botsman now says that the blockchain is the next step in shifting trust from institutions to strangers. “Even though most people barely know what the blockchain is, a decade or so from now, it will be like the internet,” she writes. “We’ll wonder how society ever functioned without it.”
The ambitious promises all sound very familiar.
The Trust & Technology Initiative brings together and drives forward interdisciplinary research from Cambridge and beyond: to explore the dynamics of trust and distrust in relation to internet technologies, society and power; to better inform the trustworthy design and governance of next-generation tech at the research and development stage; and to promote informed, critical and engaging voices supporting individuals, communities and institutions in light of technology's increasing pervasiveness in societies.
Source: Trust & Technology Initiative
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they're often single points of failure.

When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there's a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don't have the expertise to audit?

Blockchain enthusiasts point to more traditional forms of trust — bank processing fees, for example — as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that's the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.

Blockchain doesn't eliminate the need to trust human institutions. There will always be a big gap that can't be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There's always a need to override the rules, and there's always a need for the ability to make permanent rule changes. As long as hard forks are a possibility — that's when the people in charge of a blockchain step outside the system to change it — people will need to be in charge.
It's hard enough to get enterprises that compete with each other to work together as a team, but it's especially tricky when one of those rivals owns the team.

Shipping giant Maersk and tech provider IBM are wrestling with this problem with TradeLens, their distributed ledger technology (DLT) platform for supply chains. Some 10 months ago, the project was spun off from Maersk (the largest container shipping company on the planet) into a joint venture with IBM. But in that time the network has enticed only one other carrier onto the platform: Pacific International Lines (PIL), one of eight shipping lines in Asia and 17th in the world based on cargo volumes.

As those involved admit, that's not enough.
The recent financial crisis and, especially, anti-austerity policies reflect and, at the same time, contribute to a crisis of representative democracy. In this article, I discuss the different conceptions of trust (and their relations to democracy) that have been debated in the social sciences, and in public debates, in recent times. The financial crisis has in fact stimulated a heated debate on "whose trust" is relevant for "whose democracy". After locating the role of trust in democratic theory, I continue with some illustrations of declining political trust in Europe, drawn from my own research on social movements, but also of the emergence, in theory and in practice, of other conceptions of democracy and democratic spaces where critical trust develops. The Indignados movements in Spain and Greece, as well as the Occupy Wall Street protest in the US, are just the most visible reactions to a widespread dissatisfaction with the declining quality of democratic regimes. They testify to the declining legitimacy of traditional conceptions of democracy, as well as to the declining trust in representative institutions. At the same time, however, these movements conceptualize and practice different democratic models that emphasize participation over delegation and deliberation over majority voting. In doing so, they present a potential for reconstructing social and political trust from below.
At the tip of the hype cycle, trust-free systems based on blockchain technology promise to revolutionize interactions between peers that require high degrees of trust, usually facilitated by third party providers. Peer-to-peer platforms for resource sharing represent a frequently discussed field of application for “trust-free” blockchain technology. However, trust between peers plays a crucial and complex role in virtually all sharing economy interactions. In this article, we hence shed light on how these conflicting notions may be resolved and explore the potential of blockchain technology for dissolving the issue of trust in the sharing economy. By means of a dual literature review we find that 1) the conceptualization of trust differs substantially between the contexts of blockchain and the sharing economy, 2) blockchain technology is to some degree suitable to replace trust in platform providers, and that 3) trust-free systems are hardly transferable to sharing economy interactions and will crucially depend on the development of trusted interfaces for blockchain-based sharing economy ecosystems.