“Determine how trustworthy a person is in just one minute.” That’s the pitch from DeepScore, a Tokyo-based company that spent last week marketing its facial and voice recognition app to potential customers and investors at CES 2021.

Here’s how it works: A person—seeking a business loan or coverage for health insurance, perhaps—looks into their phone camera and answers a short series of questions. Where do you live? How do you intend to use the money? Do you have a history of cancer? DeepScore analyzes the muscular twitches in their face and the changes in their voice and delivers a verdict to the lender or insurer: this person is trustworthy, this person is probably not.
Iyad Rahwan: How to trust machines?

Machine intelligence plays a growing role in our lives. Today, machines recommend things to us, such as news, music, and household products. They trade in our stock markets and optimise our transportation and logistics. They are also beginning to drive us around, play with our children, and diagnose our health. How do we ensure that these machines will be trustworthy? This lecture explores various psychological, social, cultural, and political factors that shape our trust in machines, and argues that meeting the challenges of the information revolution must not be understood as a problem of computer science alone.
Previously we built Semaphore, which allows a static anonymous reputation system. Here we propose an extension of Semaphore in which we can destroy a user’s reputation without knowing their identity. We use this to build a binary reputation system, which can trivially be expanded to a non-binary reputation system.
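The core idea can be sketched as follows. This is a minimal illustrative model, not the real Semaphore circuits: in Semaphore-style systems a user proves group membership in zero knowledge and reveals only a "nullifier" derived from their secret identity, so the verifier never learns who they are. A binary reputation system then reduces to a set of destroyed nullifiers: to "destroy" a user's reputation, their nullifier is added to the set, without ever learning the identity behind it. All names here (`nullifier`, `BinaryReputation`) are hypothetical.

```python
import hashlib

def nullifier(identity_secret: str, context: str) -> str:
    """Hypothetical nullifier: a hash binding a secret identity to a context.
    (Real Semaphore derives this inside a zero-knowledge circuit, so the
    secret is never revealed.)"""
    return hashlib.sha256(f"{identity_secret}|{context}".encode()).hexdigest()

class BinaryReputation:
    """Tracks reputation per anonymous nullifier: trusted until destroyed."""
    def __init__(self):
        self.destroyed: set[str] = set()

    def destroy(self, null: str) -> None:
        # Ban the pseudonym; the underlying identity stays unknown.
        self.destroyed.add(null)

    def is_trusted(self, null: str) -> bool:
        return null not in self.destroyed

rep = BinaryReputation()
alice = nullifier("alice-secret", "forum-v1")
assert rep.is_trusted(alice)
rep.destroy(alice)  # a moderator destroys the pseudonym's reputation
assert not rep.is_trusted(alice)
```

A non-binary variant would replace the set with a mapping from nullifiers to scores, adjusted up or down per interaction, again without ever linking a score to a real identity.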
We have studied trust for 20 years and believe that it is the ultimate currency in the relationship that all institutions—companies and brands, governments, NGOs and media—build with their stakeholders. Trust defines an organization’s license to operate, lead and succeed. Trust is the foundation that allows an organization to take responsible risk, and, if it makes mistakes, to rebound from them.

For a business, especially, lasting trust is the strongest insurance against competitive disruption, the antidote to consumer indifference, and the best path to continued growth. Without trust, credibility is lost and reputation can be threatened.

Edelman’s trust research, the Edelman Trust Barometer, turns the deep data we collect into real-world insights, and our trust consulting platform, Edelman Trust Management, interprets those insights to help our clients plan, make decisions and take action.
Source: Trust | Edelman
China’s plan to establish a social credit system (SCS) has aroused the concern of building a surveillance state. Yet this view oversimplifies and misunderstands the essence of the SCS. The highest priorities of the SCS are promoting economic credibility and reinforcing court orders. Meanwhile, the SCS aims to steer citizens’ social behaviors and interactions by utilizing a redlist system that introduces numerous moderate rewards. The SCS is also more lax in execution than in planning. It reflects a unique Chinese understanding of law, which treats law as a moral guide. This article also acknowledges the concerns for the SCS. Without actively preventing positive and negative invasions in the construction of the project, the SCS authorities will risk creating further mistrust in society.
As we use these services, they learn more and more about us. They see who we are, but we are unable to see into their operations or understand how they use our data. As a result, we have to trust online services, but we have no real guarantees that they will not abuse our trust. Companies share information about us in any number of unexpected and regrettable ways, and the information and advice they provide can be inconspicuously warped by the companies’ own ideologies or by their relationships with those who wish to influence us, whether people with money or governments with agendas.
To protect individual privacy rights, we’ve developed the idea of “information fiduciaries.” In the law, a fiduciary is a person or business with an obligation to act in a trustworthy manner in the interest of another. Examples are professionals and managers who handle our money or our estates. An information fiduciary is a person or business that deals not in money but in information. Doctors, lawyers, and accountants are examples; they have to keep our secrets and they can’t use the information they collect about us against our interests. Because doctors, lawyers, and accountants know so much about us, and because we have to depend on them, the law requires them to act in good faith—on pain of loss of their license to practice, and a lawsuit by their clients. The law even protects them to various degrees from being compelled to release the private information they have learned.
This week, Congress released a report on big tech monopolies that makes clear what so many Americans instinctively know: A handful of powerful corporations rule over our lives and our economy. The report details the actions the four big tech platforms — Amazon, Google, Facebook and Apple — have taken in gaining and preserving their monopoly power across numerous markets. (Amazon CEO Jeff Bezos owns The Washington Post.) The report’s prescription for undoing their power is just as clear: We must break them up. Alongside this essential recommendation, the report also calls for strengthening the antitrust laws and adopting new rules to ensure the dominant platforms do not exploit their power. If we fail to confront the tech monopolies head on, the report argues, we relinquish our control over the way we shop, sell and speak to one another.
The thesis of the Berg et al. effort is that the key problem (at the margin, at least) is trust. The optimizing organizational response involves institutional cryptoeconomics. As they note, the chief problem in cryptoeconomics is designing mechanisms for generating reliable consensus; this is rather technical, and involves code that embodies solutions to strategic problems. Institutional cryptoeconomics asks what institutional forms will best embed cryptoeconomic solutions organically and with little friction into their daily operations.

Again, the authors recognize the significance of their claim. If they are right, the new solutions to the problems of trust are just as important, and as disruptive, as the creation of the joint stock corporation. That means that the transformation, if it occurs, will happen on a massive scale and at breathtaking speed. The institutional advantages of the joint stock corporation, in terms of raising large amounts of capital and monitoring and enforcing contracts, were such that the commercial world went from “no corporations” to “above a certain size, only corporations” within a century. By analogy, at this point, uses of blockchain protocols to solve large-scale commercial problems are nearly unknown, but, in a few years, no other form of organization will be viable.
Social trust is linked to a host of positive societal outcomes, including improved economic performance, lower crime rates and more inclusive institutions. Yet the origins of trust remain elusive, partly because social trust is difficult to document over time. Building on recent advances in social cognition, we design an algorithm to automatically generate trustworthiness evaluations for the facial action units (smile, eyebrows, etc.) of European portraits in large historical databases. Our results show that trustworthiness in portraits increased over the period 1500–2000, paralleling the decline of interpersonal violence and the rise of democratic values observed in Western Europe. Further analyses suggest that this rise of trustworthiness displays is associated with increased living standards. Quantifying how social trust evolved throughout history can help us understand the long-run dynamics of our societies. Here, the authors show an increase in displays of trustworthiness, using a face processing algorithm on early to modern European portraits.