The role of trust in economics and finance
Contribution of [[SIMONE GOZZINI]]
<span id="introduction"></span>
= Introduction =
Trust is a fundamental sentiment, the binding force behind modern societies: without it, no progress would have been possible. Trust is what permitted the birth of modern finance with the Buttonwood Agreement of 1792, it is what makes people able to rely on another person or organization without the continuous need to assess what the other party is doing and it is ultimately what permits the existence of modern democracies (Warren, 2018). Without it, no meaningful relationship would be possible. However, in the last few years, a general sentiment of distrust has been permeating civil society. Edelman is a global communication firm that conducts a comprehensive survey about trust every year. For 2022, the results picture a general sentiment of distrust across all segments of the population: 60% of the interviewed say that their default tendency is to distrust others, the media are seen as a divisive and untrustworthy institution by around 50% of the people and trust in government is dropping significantly, year after year. The problem is particularly accentuated regarding governments, which are seen as unable to fix societies’ problems (Edelman, 2022). Distrust affects modern societies as a whole, impacting not only social relationships and the economy, but also human health: for example, lower trust in government has led to lower vaccination rates against COVID-19, threatening society as a whole (Bajos et al., 2022).
This paper highlights the importance of trust in modern economies and in the financial world. Section [[#Concept|2]] describes the concept of trust, differentiating it from other human sentiments like cooperation and confidence. In general, the concept of risk is involved, given that trust entails a sort of faith in someone or something. Section [[#Measure|3]] describes the various methodologies used in the literature to measure trust: trust games, surveys and the frontier of neuroscience. Section [[#Comparative|4]] presents trust as a source of comparative advantage in world trade patterns: societies with more trust have bigger and more productive firms. Section [[#stock|5]] studies how trust affects stock market participation: people with a higher tendency to trust are more likely to participate in the stock market and, conditional on participating, they invest a higher fraction of their wealth. Section [[#Money|6]] describes a general equilibrium model where money is seen as a substitute for trust: the allocation of resources in a trustworthy society can also be reached in a trust-less society which employs money. Section [[#Institution|7]] describes a stylized model of trust between individuals and an institution: the exchange of information among individuals is found to be a tool that improves the assessment of the true trustworthiness of an institution. Section [[#Trust_blockchain|8]] presents the blockchain technology as a new architecture of trust, describing also how trust can be enhanced to reach a wider diffusion and application of this technology. Section [[#Games|9]] presents various papers regarding trust games in the blockchain technology, considering in particular how to reach and improve the consensus process. Section [[#algorithms|10]] describes how algorithms, which are becoming more and more important in modern life, can be trusted: in particular, the author highlights transparency and accessibility as fundamental characteristics to enhance trust. Section [[#conclusion|11]] concludes.
<span id="Concept"></span>
= The concept of Trust =
According to the definition of Gambetta (2000), trust is ''“a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action”'' (Gambetta, 2000, p. 5).
This definition highlights important concepts:
* Trust is a probability ''p'', a threshold, but a subjective one: people engage in a trust relationship if they believe that the probability that the other person will perform the particular action mentioned in the definition is higher than a certain level, which depends on the individual predisposition to trust and on the circumstances under which the relationship is being created, like the cost of misplacing trust.
* Trust is related to uncertainty: the underlying assumption is that the agent is not able to fully monitor the other agent while he performs the particular action (this is usually the case in practice), otherwise trust would not be necessary since the first could keep track of the second while performing the action.
* It has an impact on the trustor, otherwise the actions of the second agent would not matter to the first, and engaging in a relationship would not be necessary.
Trust is relevant when the other agents (trustees) are free to betray the trustor; otherwise, if coercion intervenes, the outcome of the trust relationship is known ex-ante. Assessing a probability would no longer be needed and uncertainty would not be involved: the resulting interaction would not be a trust relationship, given that it lacks its fundamental characteristics. Furthermore, the trustor also needs to be free to choose whether to engage in this relationship or to escape from it, otherwise in this case too assessing ''p'' would not be needed, since he would have no choice.
Therefore, trust is fundamentally a free choice between two individuals who seek mutual benefits and it involves a level of risk for the trustor, given that it affects his personal sphere of action.
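Gambetta’s threshold reading of ''p'' can be made concrete with a minimal numerical sketch (the payoff figures below are hypothetical, chosen only for illustration): a trustor engages when the expected gain from trusting outweighs the expected loss, which pins down the threshold that ''p'' must clear.

```python
def trust_threshold(gain, loss):
    """Minimum subjective probability p at which trusting has
    non-negative expected value: p*gain - (1-p)*loss >= 0."""
    return loss / (gain + loss)

def should_trust(p, gain, loss):
    """Engage in the trust relationship iff p clears the threshold."""
    return p >= trust_threshold(gain, loss)

# Hypothetical stakes: gain 100 if trust is honored, lose 50 if betrayed.
threshold = trust_threshold(gain=100, loss=50)
print(round(threshold, 3))                    # 0.333
print(should_trust(p=0.5, gain=100, loss=50)) # True: 0.5 clears 0.333
print(should_trust(p=0.2, gain=100, loss=50)) # False
```

Note how the threshold falls as the cost of misplaced trust shrinks relative to the gain, matching the observation above that the required level of ''p'' depends on the circumstances of the relationship.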
<span id="cooperation-and-trust"></span>
== Cooperation and trust ==
Trust, however, must not be confused with other human actions, sentiments, or beliefs. Trust is different from friendship, from passion, from loyalty and from cooperation. In particular, the relationship between the latter and trust is investigated extensively by the author.
Trust can be seen as:
* a precondition for cooperation, which, together with healthy competition, is beneficial in fostering human progress. However, although it is probably the most efficient way to achieve cooperation, it is not a necessary condition: throughout history, people have used surrogates to overcome the lack of trust, like coercion, contracts and promises. All of them have the objective of diminishing the possible alternatives that the trustor and the trustee can face, thus reducing the risk for both parties in engaging in this relationship. A higher level of trust increases the probability of cooperating, but it is possible that, even though the level of ''p'' is low, the result is cooperation anyway. This is because an agent also takes into consideration the cost and the benefit of engaging (or not) in such a relationship, the other alternatives he has and the specific situation.
* an outcome of cooperation. Societies may engage in cooperation thanks to ''“a set of fortunate practices”'' (Gambetta, 2000, p. 10), particular circumstances and the need to satisfy mutual interests, for which the cost of not engaging in cooperation is higher than the risk of engaging. Trust is therefore the outcome of these practices and there is no need for prior beliefs about the trustworthiness of the other party, since trust will arise only after the beginning of the relationship, when information is collected. This statement is reinforced by the fact that cooperation exists among animals, which are unlikely to experience trust.
However, according to the author, there is no reason to say that cooperation is a spontaneous equilibrium in human interaction: cooperation is just as likely as non-cooperation. A predisposition to trust may be rational for humans in order to achieve their objectives, since trust is fundamentally an efficient way to achieve cooperation, but it is not necessary to wait for trust to evolve in order to initiate cooperation. Common interests and constraints can be enough and they can be beneficial especially in underdeveloped countries, which present a low level of trust. In fact, although trust and trustworthiness can be advantageous for an individual’s purposes, they cannot be artificially induced in a rational person. Moreover, the author argues that rational trustors and trustees may seek and present evidence to trust and be trustworthy. However, more information cannot fully solve the problem of trust. People, once they trust, do not try to find evidence to corroborate their belief; rather, they change their mind only if they find contrary evidence, which is not easy.
<span id="the-concept-of-trust-in-organizations"></span>
== The concept of trust in organizations ==
Mayer et al. (1995) start from the studies of Gambetta to further develop the understanding of trust in the context of organizations. In particular, they focus on the trust relationship between two individuals: a trustor who decides whether to trust an individual to perform a particular action and a trustee who receives the trustor’s trust and then decides whether to fulfil that action. The flow of trust is unidirectional: mutual trust between two parties is not developed in the paper, nor is trust in a social system. In particular, according to the authors, the concept of vulnerability is what is missing in the definition of Gambetta, given that ''“Trust is not taking risk per se, but rather it is a willingness to take risk.”'' (Mayer et al., 1995, p. 712). Trust is then distinguished from related constructs such as:
* Cooperation, which is also studied intensively by Gambetta. The authors highlight the fact that trust is not a conditio sine qua non for cooperation, since it is possible to cooperate with someone who is not trustworthy (for example when there are external controls and constraints like those discussed in the previous section).
* Confidence: the main difference lies in the fact that, with trust, risk must be assumed, while with confidence it is not necessary. Moreover, when a person chooses to trust, he will consider a set of possible alternatives, while that is not the case with confidence.
* Predictability: trust and predictability are both ways to cope with uncertainty but, if a person is predictable, it does not necessarily mean that it is worth placing trust in him. This is because it is possible to predict that the other person will consistently behave in negative ways (and the uncertainty is reduced), but no rational individual would put trust in him.
Then, the characteristics of the trustor and the trustee are analyzed, which can together initiate a trust relationship between the two agents. The most important feature of the trustor is his propensity to trust another person, which is a personal trait constant over time and across situations. It is a general willingness to trust and it is not related to another party, since it is measurable before any interaction with the other agent. However, each trustor has different levels of trust for various trustees, which arise after the relationship is initiated. Therefore, they depend on the characteristics and actions of the trustee, i.e. his trustworthiness. According to the authors, there are three main characteristics of the trustee that are able to explain trustworthiness:
* Ability: it is defined as the set of skills and competencies of the trustee over a specific domain. It is possible to trust another person to perform a particular action if the other agent is competent in that field; otherwise he should not be trusted, even though he may be committed to completing the task. Therefore, trust should not be understood in absolute terms, but over a specific field of knowledge.
* Benevolence: it is a personal trait of the trustee towards the trustor, related to how much the former wants good for the latter. More benevolence leads to higher trust because the trustor can be more certain that the trustee will perform the action taking into account his benefit as well and not only the trustee’s egoistic motives.
* Integrity: it is defined as ''“the trustor’s perception that the trustee adheres to a set of principles that the trustor finds acceptable”'' (Mayer et al., 1995, p. 719). A trustee’s integrity therefore depends on what the trustor’s set of beliefs is. If the trustor thinks that the integrity of the trustee is not sufficient, he will not engage in a trust relationship with him.
In particular, integrity will be central in the early stages of the relationship, before gaining any insights; then, benevolence will become important over time, as the trustor retrieves information during the course of the relationship; ability, instead, stays important from the beginning to the end. After engaging in the trust relationship, the trustor will be able to gain new data and information, through which he can update his beliefs about these three characteristics of the trustee, eventually deciding whether the placement of trust is still reasonable. While a trustor tries to assess these characteristics of the trustee, the role of context becomes important because it affects ability (for example because a change in the situation may change the skills needed to complete a certain task), the level of benevolence (for example if the trustee changes his behavior over time) and integrity (for example because a certain action of the trustee is interpreted as incoherent with the trustor’s set of values only because the trustee was obliged to act that way by the specific situation).
Finally, the authors deal with the risk involved in trusting. In particular, they highlight the fact that there is no risk taking in the propensity to trust: risk arises only when an agent effectively engages in a trust relationship. However, the form and the level of risk assumed by the trustor will depend on the level of trust involved in the relationship: the more the trustor trusts the trustee, the more risk he will be willing to take. So, before initiating a trust relationship, the agent has to assess whether the level of trust is higher or lower than the perceived level of risk, so that he can decide whether it makes sense to engage in such a relationship.
<span id="Measure"></span>
= How to measure trust =
Trust is therefore a fundamental device in human society and it is also important in economics and finance, as this paper will later explain. A natural question arises: ''how is it possible to measure trust?'' The question is not easy to answer, since trust is a human sentiment, therefore subjective and emotional, and one which is interwoven with other human sentiments and beliefs. The following subsections review the major methods used in the literature to measure trust, highlighting the main limitations of each.
<span id="trust-games-and-game-theory"></span>
== Trust Games and Game Theory ==
Experimental economics has relied intensively on game theory to quantify trust. The games mostly used nowadays are various versions of the ''Trust Game'', which was invented by Berg et al. (1995). Alós-Ferrer and Farolfi (2019) describe it as follows: ''“A first agent, called the trustor, is given a monetary endowment X, and can choose which fraction p of it (zero being an option) will be sent to the second agent, called the trustee. The transfer p · X is then gone, and there is nothing the trustor can do to ensure a return of any kind. Before the transfer arrives into the trustee’s hands, the transfer is magnified by a factor K <math display="inline">></math> 1. The trustee is free to keep the whole amount without repercussion. Crucially, however, the trustee has the option to send a fraction q of the received transfer back to the trustor, hence honoring the trustor’s initial sacrifice”'' (Alós-Ferrer & Farolfi, 2019, p. 1). The transfer of the trustor can become a measure of trust, while the subsequent transfer of the trustee is a measure of trustworthiness. These games underline some important features of trust as described by Gambetta (2000): the trustor’s and trustee’s decisions are free and voluntary, uncertainty and risk are involved and there are possible repercussions for the trustor (a loss in utility).
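The monetary flows just quoted can be written down directly. The sketch below (function and parameter names are illustrative, not from the paper) computes both players’ payoffs for given choices of ''p'' and ''q'':

```python
def trust_game(X, p, K, q):
    """Payoffs of a one-shot Berg et al. (1995) trust game.
    X: trustor's endowment; p: fraction sent by the trustor;
    K: multiplier (> 1); q: fraction of the magnified transfer returned."""
    transfer = p * X           # leaves the trustor's endowment
    received = K * transfer    # magnified before reaching the trustee
    returned = q * received    # the trustee's voluntary back-transfer
    return (X - transfer + returned,   # trustor's payoff
            received - returned)       # trustee's payoff

# Endowment 10, half sent, K = 3, one third returned: both end up with 10.
print(trust_game(X=10, p=0.5, K=3, q=1/3))   # (10.0, 10.0)
# A fully selfish trustee (q = 0) leaves the trustor worse off.
print(trust_game(X=10, p=0.5, K=3, q=0))     # (5.0, 15.0)
```

The second call shows why the trustor’s transfer measures trust: sending anything at all exposes him to a loss that only the trustee’s voluntary return can repair.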
However, despite the popularity of this method, there are various limitations that need to be addressed. In the agents’ behavior, there might be motivational confounds that affect the measurement of trust and trustworthiness, like selfish or altruistic tendencies, efficiency reasons, or prior personal preferences (like inequity aversion). To address this problem, the authors suggest taking as a measure the difference between the transfers in the Trust Game and those in a game called ''the Dictator Game'' (i.e. a game where the proposer’s decisions are implemented without the possibility for the responder to do anything).
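A minimal sketch of this correction, using invented sample transfers, is just a difference in means between the two games:

```python
def altruism_corrected_trust(trust_transfers, dictator_transfers):
    """Mean Trust Game transfer minus mean Dictator Game transfer:
    the difference is attributed to trust rather than pure altruism."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(trust_transfers) - mean(dictator_transfers)

# Hypothetical data: subjects send 6 on average when the trustee can
# reciprocate, but only 2 on average when the receiver cannot respond.
print(altruism_corrected_trust([5, 6, 7], [1, 2, 3]))   # 4.0
```

Whatever subjects give even when no return is possible is read as altruism; only the excess sent in the Trust Game is counted as trust.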
Then, the question of the risk attitudes of the agents is addressed. Trust involves a certain risk (given that the trustor cannot monitor the response of the trustee), so attitudes towards risk may affect the monetary transfers. The evidence is mixed, with early studies (like Houser et al., 2010) finding no relationship between risk attitudes and trust and others finding a correlation. The lack of agreement might be due to the concept of risk involved in the trust games themselves, which is not a pure financial risk but a ''betrayal aversion'', i.e. the risk and fear of being betrayed by another human being. Taking this into account, the authors mention the study of Bohnet and Zeckhauser (2004), where a “betrayal aversion” was found in the decisions of the agents, different from standard risk aversion.

Therefore, to disentangle the trust component and the risk component of the agents’ transfers, standard measures of risk might not fit properly. The authors, however, also criticize the use of game variants to address this issue, since new measures may capture other undesired effects.
Another problem can arise when there are changes in the parameters, implementation and description of the trust game: the responses of the agents might not be consistent across all contexts, making different experiments impossible to compare. For instance, increasing the multiplier K will likely increase the trustor’s transfer and also the fraction returned by the trustee, according to Lenton and Mosley (2011). Moreover, the way the game is framed can also have an impact: Burnham et al. (2000) show that the responses of the agents involved depend on whether, in the instructions of the game, the other agent was called partner or opponent. In the former case, the trustor trusted the trustee more than in the latter case. However, if the game is not framed at all, the participants might create their own frame, thus interpreting the play in different and unpredictable ways, leading to biased results.
<span id="sec:surveys"></span>
== Surveys ==
Another possible measure of trust relies on the use of surveys. The most important example is the General Social Survey (GSS) of the U.S. National Opinion Research Center. The question asked is: ''“Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people?”''. The possible answers are: “Most people can be trusted”, “Can’t be too careful” or “I don’t know”. This question is also used in other important surveys, like the EVS (European Values Survey), the WVS (World Values Survey), the BHPS (British Household Panel Study) and the ANES (American National Election Studies).
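A survey-based trust index is typically just the share of respondents who pick the trusting answer. A minimal tabulation (with made-up responses, and the design choice, assumed here, of excluding “I don’t know” from the denominator) could look like:

```python
from collections import Counter

TRUSTING = "Most people can be trusted"

def trust_share(responses):
    """Fraction of valid answers picking the trusting option;
    'I don't know' responses are excluded from the denominator."""
    counts = Counter(responses)
    valid = counts[TRUSTING] + counts["Can't be too careful"]
    return counts[TRUSTING] / valid

sample = [TRUSTING, "Can't be too careful", TRUSTING,
          "I don't know", "Can't be too careful"]
print(trust_share(sample))   # 0.5
```

How non-responses are treated is itself a judgment call, which is one reason survey-based trust levels are hard to compare across studies.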
This method is not immune from problems. For example, the interpretation of each individual might play a role in the response, as seen in the Trust Game. Moreover, the relationship between these two methods should be taken into account. Ideally, if both were valid and consistent, the responses should be highly correlated. However, the evidence is mixed. Glaeser et al. (2000) find no correlation between the two measures, while Fehr et al. (2003) find evidence to the contrary. An explanation could be that surveys test a general propensity to trust, while Trust Games measure a specific strategic situation of the agents’ behavior. The concept of trust is therefore not uniquely determined and different methodologies might capture different aspects of this complex human attitude.
Moreover, the authors suggest that, if surveys are used as a measure, one must take into account various controls (like culture, geography and age) to interpret and therefore compare the responses.
<span id="neuroscience"></span>
== Neuroscience ==
The new frontier in the measurement of trust is represented by neuroscience, which tries to provide more objective, biological methods.
Firstly, the relationship between oxytocin (OT) and trust is investigated, in particular to link OT levels with behavior in the Trust Game. Zak et al. (2005) find that OT levels can predict trustees’ trustworthiness but not trustors’ transfers. However, when the change in OT levels is endogenous (i.e. natural, like in the paper mentioned above), the studies cannot establish causality. Hence, another set of studies, where the level of OT was exogenously determined, is examined. Kosfeld et al. (2005) find that the treatment group in their experiment (i.e. the people to whom OT was administered) presents larger trustors’ transfers compared to the control group, but no significant differences in the trustees’ transfers. Moreover, their results suggest that OT causally increases trust through a reduction of betrayal aversion and that it does not increase risk-taking behavior or prosocial aptitudes in general. The two methods of investigation therefore lead to results that are inconsistent with each other, so no conclusion can be reached: the relationship of OT with trust and trustworthiness is not as simple as previously thought.
Finally, the authors introduce the latest studies about the use of brain imaging to understand where trust comes from and how it forms. This might be useful to develop more reliable measures of trust in the future.
<span id="Comparative"></span>
= Trust as a source of comparative advantage =
Cingano and Pinotti (2016) study the effect of trust on firm organization and on comparative advantage. The authors argue that interpersonal trust means more delegation of decisions within a firm, resulting in a larger firm size and in the expansion of more productive units. If trust is established, it is possible to expand the firm beyond familiar and friendly relationships, thus applying the firm’s own productivity advantage to a larger amount of input, given that the firm is bigger and has more factors of production. The principal-agent problem (which comes with delegation and prevents a higher level of it) can be partially solved by this human device. In particular, higher delegation causes higher productivity through:
* higher exploitation of the informational advantage of the managers and of the specific skills of some workers.
* the reduction of information costs.
* more resiliency and ability to cope with changes in profit and growth opportunities.
Studying a sample of Italian and European companies, the authors find that trust, together with human capital and intangible intensity, is associated with greater delegation, which, in turn, is associated with larger firm size. Their findings suggest that high-trust countries present a higher value added per worker and higher exports in industries where delegation is needed, thus making trust a source of comparative advantage in trade patterns. This effect is the result of a shift from smaller firms towards bigger firms.
The authors test their hypotheses through empirical data obtained with surveys. They retrieve data from:
* The INVIND survey from the Bank of Italy, which provides information about inputs, outputs, internal organization and governance of a sample of more than 6500 firms. These data are used to test trust differences across Italian regions.
* The World Values Survey (WVS) and the European Social Survey (ESS) to measure interpersonal trust and delegation.
* The OECD Structural Analysis Database (STAN) and the OECD Business Demographic Statistics, which provide information about value added per worker, organization and the number of workers of European firms.
The analysis starts with the following regression:
<math display="block">Y_{jr} = \alpha + \beta(Trust_r \times Delegation_j) + \delta X_{jr} + \mu_r + \mu_j + \varepsilon_{jr}</math> | |||
Where <math display="inline">Y_{jr}</math> is industry specialization (measured through value added per worker or exports), <math display="inline">Trust_r</math> is the average level of trust in region ''r'', <math display="inline">Delegation_j</math> is a measure of the need for delegation in industry ''j'', <math display="inline">X_{jr}</math> collects other determinants of specialization, and <math display="inline">\mu_r</math> and <math display="inline">\mu_j</math> are region and industry fixed effects capturing geographical and industry factors.
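As a rough illustration of how an interaction specification like the one above can be estimated, the following sketch fits it by least squares on simulated data (all numbers, sample sizes and effect values here are invented for illustration and are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_industries = 20, 10

# Invented data: trust varies by region, need for delegation by industry.
trust = rng.uniform(0.2, 0.8, n_regions)
delegation = rng.uniform(0.0, 1.0, n_industries)

beta_true = 2.0  # assumed effect of the Trust x Delegation interaction
y = np.empty(n_regions * n_industries)
interaction = np.empty_like(y)
for r in range(n_regions):
    for j in range(n_industries):
        k = r * n_industries + j
        interaction[k] = trust[r] * delegation[j]
        # Outcome = interaction effect + region/industry components + noise.
        y[k] = (1.0 + beta_true * interaction[k]
                + 0.5 * trust[r] + 0.3 * delegation[j]
                + rng.normal(0, 0.1))

# Design matrix: intercept, interaction, and the trust/delegation levels
# standing in (crudely) for the fixed effects mu_r and mu_j.
X = np.column_stack([
    np.ones_like(y),
    interaction,
    np.repeat(trust, n_industries),
    np.tile(delegation, n_regions),
])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
# beta_hat[1] recovers a value close to beta_true
```

In the paper the fixed effects are full sets of region and industry dummies; the two continuous controls above are only a compact stand-in for this sketch.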
Then, the authors estimate <math display="inline">Delegation_j</math> through the following regression:
<math display="block">Centers_{jr} = \eta + \theta lnL_{ijr} + \mathit{f_j} + \mathit{f_r} + \mathit{v_{ijr}}</math> | |||
Where <math display="inline">Centers_{jr}</math> is the number of responsibility centers (a measure of delegation inside firms), <math display="inline">lnL_{ijr}</math> is the log of the number of workers (which is held fixed, so that delegation is compared across firms of the same size) and <math display="inline">\mathit{f_j}</math> and <math display="inline">\mathit{f_r}</math> are industry and region fixed effects.
In particular, the analysis shows that, for the Italian sample, higher trust leads to an increase in the production of delegation-intensive industries. Starting with the log of value added per worker as the dependent variable, the authors add a series of controls. Introducing human capital, the calculations show that it remains the main source of the pattern of specialization but, despite being correlated with delegation (which in turn has an effect on trust), the latter variable remains statistically significant. Then, two other controls are introduced: financial development and judicial quality. However, they do not affect the coefficient of trust, thus making the estimation more robust and consistent. The results are similar when the dependent variable is exports.<br />
For the international sample, the analysis is more complicated because different countries present different institutional dimensions, like labor market regulations and property protections. The results, however, are very similar, making their thesis consistent also at the international level. | |||
<span id="stock"></span> | |||
= Trust and the stock market = | |||
Guiso et al. (2008) study the effect of trust on stock market participation across individuals and across countries. Starting from Gambetta (2000), they define trust as ''“the subjective probability individuals attribute to the possibility of being cheated”'' (Guiso et al., 2008, p. 2557), which depends on the characteristics of the financial system and the individual priors and predisposition to trust.
Firstly, they develop a theoretical model in which they reproduce the effect of trust on portfolio decisions, starting with a two-asset model (one safe asset and one stock). They assume that investors know the distribution of returns but they are worried, with a level of subjective probability ''p'', about other bad events, like the possibility of fraud perpetrated by their broker, which would lead to a return of 0 on the stock. They also assume 0 participation cost. Given a level ''W'' of wealth, with <math display="inline">\tilde{r}</math> the return on the stock investment and <math display="inline">r_f</math> the risk free rate, each agent chooses the share <math display="inline">\alpha</math> of wealth to invest in the risky asset that maximizes his expected utility <math display="block">\max_{\alpha}\; (1-p)EU(\alpha \tilde{r} W +(1-\alpha)r_f W) + pU((1-\alpha)r_f W).</math>
They also calculate that a risk averse individual will invest in the stock market only if his subjective probability of being cheated is low enough, i.e. <math display="inline">p < \bar{p}</math>, where <math display="inline">\bar{p} = (\bar{r} - r_f)/\bar{r}</math> and <math display="inline">\bar{r}</math> is the mean of the true distribution of the returns of the stock. This threshold comes from the fact that an investor invests in the risky asset only if its expected return, <math display="inline">(1-p)\bar{r} + p \times 0 = (1-p)\bar{r}</math>, is higher than the risk free rate <math display="inline">r_f</math>.
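The participation rule can be made concrete with a small numerical sketch (the return figures below are invented for illustration):

```python
def participation_threshold(r_bar, r_f):
    """p_bar = (r_bar - r_f) / r_bar: the investor participates only if
    his subjective probability of being cheated is below this value."""
    return (r_bar - r_f) / r_bar

def invests(p, r_bar, r_f):
    # Participate iff the cheat-adjusted expected return beats the
    # risk-free rate: (1 - p) * r_bar > r_f, i.e. p < p_bar.
    return p < participation_threshold(r_bar, r_f)

# Invented gross returns: stock 1.08 on average, risk-free 1.03.
p_bar = participation_threshold(1.08, 1.03)   # roughly 0.046
```

Even a modest fear of being cheated (here, anything above roughly a 4.6% subjective probability) keeps the investor out of the market, regardless of the level of wealth ''W''.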
An important result of this model is that the decision to participate or not in the stock market depends on the subjective probability ''p'' of being cheated (since it reduces the expected return of the investment) and not on the level of ''W''. Since ''W'' is not significantly correlated with trust (as calculated through the survey data they use in their empirical analysis), this can explain why even the wealthy might not engage in stock trading. Moreover, <math display="inline">\alpha</math> itself depends on the level of trust: more trust means more wealth invested in risky assets and vice-versa.
Then, participation costs are introduced in the theoretical model. To enter the market, the investor now has to pay a fixed cost ''f'' (thus reducing the allocable wealth to ''W'' - ''f''). As ''f'' increases, in order to invest in stocks, a higher level of trust is necessary (<math display="inline">\bar{p}</math> decreases). In particular, less trust reduces the return on stock investment (thus making the participation less attractive) because it reduces the share of wealth invested in stocks and it reduces the expected utility from participating. | |||
Finally, the authors demonstrate that risk tolerance and trust are two distinct things by looking at the optimal number of stocks held: this number increases with trust and it also increases with risk aversion (because of the benefits of diversification). Therefore, since risk tolerance reduces the optimal number of stocks while trust increases it, the latter cannot be a proxy of the former. As the empirical analysis demonstrates, this is consistent with the data. This result is also reinforced by the fact that the authors find that individuals with high levels of trust buy more insurance, while risk tolerant individuals buy less.
The authors use survey data to test their model. In particular, they employ the DNB Household Survey (to which they have directly contributed), which covers about 1,990 individuals and tries to capture their level of generalized trust, their risk and ambiguity aversion and their optimism. It also reports some statistics about households’ assets, distinguishing in particular between listed and unlisted stocks and securities held directly or through financial intermediaries. To measure generalized trust, this survey uses the same question as the World Values Survey (see section [[#sec:surveys|3.2]] for explanation); to measure risk aversion and ambiguity aversion, the authors ask respondents their willingness to pay for some lotteries; to measure optimism, they ask them to quantify their agreement (on a scale from 1 to 5) with the following statement: ''“I expect more good things to happen to me than bad things”''. Then, the Italian bank customers survey is used to capture ''personalized trust'', i.e. the trust that an individual has towards his financial intermediary, which could be different from the general propensity to trust. This data set contains information about the financial assets of the respondents and their demographic characteristics. More importantly, to measure personalized trust, the survey asks the following question: ''“How much do you trust your bank official or broker as financial advisor for your investment decisions?”''.
The empirical analysis confirms their hypothesis. Starting from the study of the effect of generalized trust (i.e. the level of trust measured in the survey) on stock market participation, the authors find that trust has a positive and highly significant coefficient (so more trust means more participation), even after controlling for a number of variables (like age, sex and wealth). In particular, ''"Trusting others increases the probability of direct participation in the stock market by 6.5 percentage points”'' (Guiso et al., 2008, p. 2578). Risk aversion, ambiguity aversion and optimism do not seem significant, since the coefficient of trust remains unchanged when they are included. Moreover, when studying the effect of wealth, the authors find that the coefficient of trust remains significant even after controlling for this variable, thus supporting their previous statement: the lack of trust may explain why rich people do not invest in stocks even though they should not be affected by the participation costs. Then, the relationship between trust and the amount invested in risky assets is studied. The result confirms, again, the hypothesis: ''"Individuals who trust have a 3.4 percentage points higher share in stocks, or about 15.5% of the sample mean"'' (Guiso et al., 2008, p. 2580). The same results hold for risky assets in general: risk and ambiguity aversion are not statistically significant in this case either. However, a significant control is represented by the level of education. The authors find that trust increases the holding of risky securities for everyone, but less so in more educated people, since they know better than the less educated how the market works and they are less affected by priors and cultural stereotypes.
Considering now the Italian bank customers survey, the results confirm the previous ones: trust in one’s own financial intermediary increases the probability of investing in stocks and the share of wealth allocated to this type of security.
Finally, the authors investigate the implications of the level of trust for market participation across countries. The analysis is based on the following statement: less trust should mean that agents are less willing to invest and, in turn, firms will be less willing to float their equity given that it is less rewarding. Therefore, countries with lower levels of trust should have lower participation in the market. The empirical analysis confirms the previous claims: trust has a positive and significant effect on stock ownership among individuals and it also has a positive effect on stock market capitalization.
<span id="Money"></span> | |||
= Money as a substitute for trust = | |||
Gale (1978) develops a theoretical model to study the effect of the introduction of money in an economy characterized by a lack of trust between its agents. The author starts from the Arrow-Debreu model of Walrasian equilibrium. This model is characterized by a finite number of consumers (who have an initial endowment of resources) and commodities, perfect competition in all markets and constant returns to scale. Moreover, markets are complete, which means that all transactions in the economy can be arranged at one time. This is made possible because transactions that involve the delivery of a commodity in a different time period (i.e. a commodity is sold in t=0 but delivered in t=1) can be concluded through contracts at time t=0. The contract specifies that the delivery will occur in t=1, even though the transaction itself is completed in t=0. Therefore, the contract is seen as the commodity being traded. This mechanism operates under the assumption that there is no uncertainty in the market. In such an environment, agents can trust each other to fulfill the contracts they have agreed upon. As a result, there is no need to distinguish between the contracts and their execution. Nevertheless, if for some reason agents start not to trust each other, and therefore uncertainty arises, some agents may prefer not to fulfill their promises and other agents, anticipating that, might not engage in a transaction in the first place. If trust were to vanish, therefore, the allocation process would break down unless some substitute were found. The author demonstrates that money can be such a substitute for trust: it permits the allocation and redistribution of resources even in the absence of trust.
To illustrate that formally, the author employs the concept of core, that is ''“the set of attainable allocations such that (a) neither agent can make himself better off by remaining self-sufficient and (b) two agents cannot both be made better off by any feasible redistribution of their joint endowment.”'' (Gale, 1978, p. 459), through which he develops the concept of sequential core to integrate time periods and uncertainty about the outcome of a contract. An allocation of commodities is trustworthy if it belongs to the sequential core, that is if it cannot be improved by any redistribution of resources in any time period. If this were not the case, an agent would have the incentive to break the contract in later periods. Therefore, any exchange of commodities without trust would not form a sequential core, because agents would have the incentive to deviate from equilibrium to increase their own utility.
To resolve this issue, the author introduces money in the model. In particular, each agent is given an endowment of money at time t=0 and it is assumed that at the end of time t=1 (the second and last period) the same amount of money must be returned as a tax. Implicitly, the model introduces a social institution (for example a government) that issues fiat money, which has no intrinsic value but is guaranteed by the imposition of the government itself (as is the case in modern economies). In between the two periods, the agents can exchange money among themselves. This solves the issue: the agents who were reluctant to keep their promises in the model without money and without trust now have an incentive to fulfill the contract, given that they need the money to pay their taxes. Money does not restore trust among agents, but it acts as a substitute, a way to enforce previous contracts and agreements. The possibility for the government to directly intervene in the fulfillment of contracts should be discarded, since it is not plausible that a human institution could be so omnipotent as to oversee every transaction in a complex economy. Therefore, money can create the conditions for trustworthy transactions (without trust) in a decentralized way. However, the institution must be able to credibly impose the payment of taxes, otherwise agents would face the same problem as before. To do that, penalties for those who do not pay taxes should be sufficiently ''gruesome'', but the author does not quantify the penalty. Moreover, the author argues that, although money can substitute for trust, there could be a loss in overall utility with respect to the case with trust. In the model, the social institution is introduced without any explicit cost, but this is unlikely to be the case in reality, since introducing a government that is able to enforce tax payment and issue securities is certainly not free.
<span id="the-gruesome-penalty"></span> | |||
== The gruesome penalty == | |||
Grimes (1990) continues the work of Gale (1978), analyzing the role of money in the same theoretical framework. The results of Gale are confirmed: without money, the outcome of an economy without trust would be autarky, since no transaction can effectively occur. With the introduction of money, however, it is possible to replicate the allocation of the economy with trust. The contribution of his work with respect to that of his predecessor is a quantification of the ''gruesome'' penalty that agents face when they do not respect their tax obligations.
In particular, the author shows that the simple introduction of money does not, by itself, replicate in an economy without trust the outcomes of the economy with trust: a sufficient incentive must be introduced, i.e. a penalty above a certain threshold that makes it unprofitable for agents not to fulfill their promises. Below this threshold, the increase in utility derived from reneging on the contract is higher than the reduction in utility due to the penalty; therefore, the optimal choice is not to fulfill the agreement. Above the threshold, on the contrary, the optimal choice is to fulfill the contracts (therefore replicating the allocation with trust). It is worth noting that the intensity of the penalty has no effect on the final allocation of goods, since they are already Pareto-efficiently allocated, but the author shows that it has an impact on prices.
To calculate the threshold, the author considers a world with two agents, two periods, no uncertainty and one good in each period. The two agents’ endowments across the two periods are <math display="inline">(1-\lambda, \lambda)</math> and <math display="inline">(\lambda, 1- \lambda)</math> respectively, where <math display="inline">\lambda</math> is a small positive number. The other features of this world are the same described in the previous paragraph. His calculations show that, to replicate the allocation with trust, the penalty <math display="inline">q(0)</math> must satisfy <math display="block">q(0) > (1-\lambda)/\lambda.</math>
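The condition can be checked numerically; a minimal sketch, reading <math display="inline">(1-\lambda)/\lambda</math> as the level the penalty must exceed:

```python
def penalty_threshold(lam):
    """Level the penalty q(0) must exceed in Grimes' two-agent example:
    q(0) > (1 - lam) / lam."""
    return (1 - lam) / lam

def contract_fulfilled(q0, lam):
    # Above the threshold, reneging costs more than it gains, so the
    # optimal choice is to honor the contract.
    return q0 > penalty_threshold(lam)

# With lam = 0.1 the penalty must exceed 9; a smaller lam (more
# unequal endowments, so more to gain by reneging) requires a
# harsher penalty.
```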
<span id="Institution"></span> | |||
= Trusting an institution = | |||
Meylahn et al. (2023) study the dynamics of trust between individuals and institutions using a stylized model of social network learning. Firstly, the authors define a model to describe the relationship between a single individual and the institution, in which the agent has repeated opportunities to place trust. The institution’s behavior is modeled by a parameter <math display="inline">\theta</math> that represents its trustworthiness, i.e. the probability that the institution honors the trust placed by the individual. So, in each round the institution honors the trust that has been placed by the agent with probability <math display="inline">\theta</math> and abuses it with probability <math display="inline">1 - \theta</math>. Similarly, the agent, in each round, can decide whether or not to place trust in the institution. The decisions taken by the two are independent in each round and the agent observes the actions of the institution only when he places trust. If trust is honored, he gains ''r'', while if it is abused he loses ''c''. Therefore, his expected utility is <math display="inline">r\theta - c(1-\theta)</math>. The agent behaves with myopic rationality: he maximizes the expected utility in each round without taking future rounds into consideration. Moreover, the agent starts the interaction with the institution having a prior belief ''P0'', which is a function of <math display="inline">\alpha</math> and <math display="inline">\beta</math>, which can be interpreted as the number of times trust was honored and betrayed in a past setting, before the beginning of the experiment. The variables of interest are <math display="inline">\tau</math>, the number of rounds after which the agent decides not to place trust anymore (from which the probability of quitting is derived), and ''q'', the expected time spent playing before quitting.
In each round, the agent updates his knowledge by taking into consideration the actions of the institution and, therefore, he updates his estimate of <math display="inline">\theta</math>. If the agent quits, he will never trust the institution again, given that there is no longer any possibility to update his estimate of the trustworthiness of the institution.
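The single-agent dynamics can be sketched as a simple simulation, assuming (as one natural reading of the model) a Beta-like prior counting honored and betrayed interactions, updated with each observed action, and a myopic agent who places trust while the estimated expected utility is non-negative; all parameter values are invented:

```python
import random

def trust_game(theta, r, c, alpha, beta, max_rounds=500, seed=0):
    """Returns the round in which the agent quits, or None if he keeps
    placing trust for the whole horizon."""
    rng = random.Random(seed)
    for t in range(1, max_rounds + 1):
        est = alpha / (alpha + beta)      # current estimate of theta
        if r * est - c * (1 - est) < 0:   # myopic expected-utility test
            return t                      # quits and never trusts again
        # Trust is placed: observe the institution's action and update.
        if rng.random() < theta:
            alpha += 1                    # trust honored
        else:
            beta += 1                     # trust abused
    return None

# A fully untrustworthy institution (theta = 0) is abandoned quickly,
# while a fully trustworthy one (theta = 1) keeps being trusted.
```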
Then, the authors define another model in which a second agent is added: the relationship between the two plays an important role in determining the relationship with the institution. The agents’ behavior and the institution’s behavior share the same characteristics as in the model with one agent: the agents choose in each round whether to place trust or not, they have a prior belief and the institution decides whether to honor or betray the agents’ trust. The authors further assume that both agents share the same prior. The key feature of this model is that each agent, in each round, receives information from the other, through which he can update his own. Two cases are analyzed:
* Agents fully communicate with each other the interactions they have with the institution. Given that the agents have the same prior and the same information available, they will have the same estimate of <math display="inline">\theta</math>. | |||
* Agents do not communicate explicitly, but they only observe the actions of the other agent. Therefore, the information received from the other agent will be incorporated only a round later.
They run their model 4000 times for the single agent model and 2000 times for the dual agents model, for a maximum of 500 rounds. They find that the probability of quitting in most of the settings (i.e. in various calibrations of the parameters) is higher in the single agent model. When considering only the two agents model, the probability is higher when the agents can only observe the actions of the other but are not able to fully communicate. However, there are some exceptions and in some simulations the observable actions setting outperforms the full communication model, thus having a lower probability of quitting. The expected time to quit is lower in the two agents model with respect to the case where there is one agent only, in particular in the model where they fully communicate (in which they therefore receive more information). This is due to the fact that having more information makes their estimates more precise: they either quit quickly or they do not, since they need less time to obtain a good estimate of <math display="inline">\theta</math>; if the estimate is not high enough, they will quit after fewer rounds, otherwise they are likely to place trust indefinitely.
Overall, the authors find that communication is always helpful since it increases the probability of continuing to trust a reliable institution and decreases the expected time of quitting an untrustworthy institution. Moreover, they find that more optimistic priors increase the possibility of trusting a trustworthy institution. Finally, they highlight that it is not possible to say which of the two agents models is better, since it depends on the parameter settings and on which criterion is taken into consideration.
<span id="Trust_blockchain"></span>
= Trust and the blockchain = | |||
As highlighted before, trust, with its dynamics, is fundamental in every aspect of a society and it is what permits societies themselves to evolve and transform. Without trust, each individual would have the burden of verifying the reliability of every other agent he encounters, which would be impossible. Trust is also what permitted the birth of modern finance, with the Buttonwood Agreement of 1792 that led to the creation of the stock market. In recent years, however, trust within modern societies has been decreasing, putting at risk the way society itself operates. People not only do not trust each other anymore, but they also do not trust the government, the media, or any other authority that was once considered credible and reliable. It is in this framework that ''“a new architecture of trust”'' was developed, leading to the birth of bitcoin and the blockchain technology in 2009. Werbach (2018) analyzes the relationship between trust and the blockchain in his book ''“The blockchain and the new architecture of trust”''.
<span id="what-are-the-blockchain-and-bitcoin"></span> | |||
== What are the blockchain and Bitcoin == | |||
The blockchain is a distributed and decentralized digital ledger (i.e. a record of accounts) that records transactions across a network of computers in a secure, transparent, and tamper-proof manner. In a blockchain, transactions are grouped into blocks, which are linked together in a chronological and linear order, forming a chain of blocks. Each block contains a list of transactions, a timestamp, and a reference to the previous block in the chain, creating a verifiable record of all transactions that have ever occurred on the network. One of the key features of a blockchain is its consensus mechanism, which ensures that all participants in the network agree on the state of the ledger. Once a block is added to the blockchain, it is considered immutable, meaning that the data in the block cannot be altered or deleted without the consensus of the majority of the network. This makes blockchains secure and resistant to tampering or manipulation. The transactions registered on the blockchain are performed through smart contracts, which are pieces of code that execute a predetermined function, like transferring a bitcoin, with no possibility to alter the agreement. Finally, a cryptocurrency is a digital currency that runs on the blockchain network. | |||
Bitcoin, introduced by Nakamoto (2009), was the first digital currency and the first example of the blockchain. It relies on three elements: cryptography, digital cash and distributed systems. Cryptography can be considered the science of secure communications and it is employed for this purpose in the blockchain technology. Each agent that interacts with Bitcoin is identified by a private key associated with a public key through the mechanism of cryptography, so that each transaction can be verified and associated with a user without the need to disclose his private key. What is called a coin is in reality a chain of signatures of verified transactions: bitcoins come from the unspent outputs of previous transactions, all registered on the blockchain. Each transaction is verified by a network of nodes (i.e. participants in a distributed network that maintain a copy of the blockchain ledger and take part in the consensus process). All the agents need to trust the state of the ledger: this is achieved by the consensus mechanism. Consensus comes from a process called mining, in which agents compete to verify the transactions and create a new block of the blockchain, in exchange for a reward (transaction fees and newly mined bitcoins). The winner is randomly decided, but all the other agents verify independently that the new block is legitimate. Being untrustworthy is not profitable: mining is an expensive activity, because miners engage in a proof of work system, where they have to solve a cryptographic puzzle to earn the right to validate the transactions. This requires energy and money and the more energy and money an agent puts into mining, the more chances he has to win. The benefits of cheating are much lower than the costs, so each agent can trust the state of the ledger because there are no incentives to deviate.
Finally, the consensus mechanism also serves to make the ledger immutable, because each block is built from the hash of the previous block. Changing a past block would mean forking the chain, and this would be rejected by the majority of users. Only if an agent controlled more than 50% of the computing power (which is extremely unlikely) would such a change be viable.
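The hash-chaining that underpins this immutability can be illustrated with a toy ledger (a deliberately minimal sketch: it has no mining, proof of work or network, and the transaction strings are invented):

```python
import hashlib
import json

def block_hash(body):
    # Hash the block body, which includes the previous block's hash.
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"transactions": transactions, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain):
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"],
                "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(body):
            return False          # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False          # the chain of hashes is broken
    return True

chain = []
add_block(chain, ["Alice pays Bob 1 coin"])
add_block(chain, ["Bob pays Carol 0.5 coins"])
```

Editing any past block changes its hash, which no longer matches the `prev_hash` stored in the following block, so everything downstream is invalidated; this is the property the consensus mechanism protects at network scale.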
<span id="a-new-form-of-trust"></span> | |||
== A new form of trust == | |||
The innovation of the blockchain is connected to the fact that every participant can trust the information recorded on the ledger without necessarily trusting another agent to validate it. There is no need for a central authority to validate the transactions and trust is reinforced by the fact that there are mechanisms that make it impossible to alter the transactions already recorded on the ledger. The idea of Satoshi Nakamoto was to design a system that, through incentives, aligned the needs and objectives of every participant, so that what is recorded on the ledger can be trusted without trusting (or knowing) the other agents. Nakamoto claimed to have eliminated the need for trust but, according to Werbach (2018), that would be impossible. What Nakamoto created is a ''“new architecture of trust”'', where independent agents run this technology, validating the transactions so that they can be recorded on the ledger. This is reinforced by the fact that distributed ledger networks make people work together in a way that otherwise would not have been possible, since they would not have trusted each other sufficiently.
To better explain what he means by a ''“new architecture”'', the author first outlines the various architectures of trust (which he defines as "the ways the components of a system interact with one another" (Werbach, 2018, p. 25)) that humans have developed over time. The main architectures are:
* Peer to peer (P2P): here, trust is based on a face-to-face relationship that arises because the agents share ethical norms and mutual commitment. The downside of this architecture is that it works only for small groups and communities, given that knowledge of each other is pivotal in creating trust.
* Leviathan: this vision starts from the belief that humans are not fully trustworthy and therefore a powerful third party, the state/government, is needed to enforce private contracts and property rights. This is achieved through the state's monopoly on violence: people can trust each other because, if something goes wrong, the Leviathan can punish the guilty and enforce previous commitments.
* Intermediaries: transactions are guaranteed by a third party (different from the government), which is trusted to perform certain actions. Intermediaries make possible transactions that would have been difficult in a peer-to-peer network: the other agent is trusted because an intermediary makes the transaction happen. Examples are e-commerce platforms such as Amazon, or financial services companies.
The new architecture of trust created by the blockchain is defined as a “trustless trust”. Without trust it would fail, since no engagement between individuals is possible without some form of trust; but if it relied on the old trust structures it would not be a revolution and would fail its primary objective. On the blockchain network, no agent is assumed to be trustworthy, but the output of the network is. Generally speaking, in every transaction the counterpart, the intermediary and the dispute resolution mechanism must be trusted; the blockchain substitutes these elements with code. There is no possibility to assess the other party’s trustworthiness, since all agents are represented by private/public keys that allow for their anonymity; there is no central intermediary, since the platform is a distributed machine operated by all the participants; disputes are solved through pieces of code called smart contracts, which perform a certain action with no possibility to stop them. Transactions are verified through cryptographic proofs that other agents can check mathematically. Therefore, it is not possible to frame this system within the common architectures: it is not P2P since the other parties are unknown, there is no central authority, and there is no central intermediary since the platform is operated in a decentralized way. Each agent needs to trust the network, not the individual agents with whom he is transacting. The blockchain (and Bitcoin) seems the perfect solution for the lack of trust in modern society and for the problems of the previous architectures of trust. The fact that Bitcoin was born after the Great Financial Crisis is no coincidence: P2P relationships were not sufficient in a world so deeply interconnected, intermediaries were considered the cause of the crisis itself, and the Leviathan, i.e. the government, was not able to foresee the crisis and prevent it.
Blockchain trust relies also on the immutability of the information recorded, through the mechanisms explained above. However, immutability must be understood in a probabilistic way: the more blocks are added after a transaction, the more immutable it becomes, because altering it would require an enormous amount of computing power. Each agent can decide after how many blocks he trusts the state of the ledger; therefore, blockchain trust is not instantaneous. Moreover, the transparency of the ledger, meaning that the record of every transaction is publicly available and the software through which the blockchain operates is open source, is an important characteristic that increases trust. Finally, blockchain trust is algorithmic, meaning that it relies on algorithms to maintain the system: what must be trusted are not the people operating on it, but the software and the math behind the consensus process.
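The probabilistic nature of immutability can be quantified with the simplified catch-up result from the Bitcoin white paper: an attacker controlling a share ''q'' of the hash power, starting ''z'' blocks behind, ever overtakes the honest chain with probability <math display="inline">(q/p)^z</math> when ''q'' < ''p''. A minimal sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with share q of the hash power ever
    overtakes the honest chain from z blocks behind (gambler's-ruin
    result from the Bitcoin white paper)."""
    p = 1.0 - q  # honest majority's share
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    return (q / p) ** z

# Waiting for more confirmations makes a rewrite exponentially unlikely:
print(catch_up_probability(0.10, 1))  # ~0.111
print(catch_up_probability(0.10, 6))  # ~1.9e-06
```

This is why agents who wait for more confirmations can assign ever-higher confidence to the state of the ledger, without that confidence ever reaching exactly one.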
Satoshi’s error was to believe that in his architecture trust was absent, while in reality it only reduced the need to trust some parts of the system. Trust is still needed, and the blockchain could not function without it. Firstly, engaging in a transaction in a system without central control and with immutability means that no one is able to oversee the transaction and amend it if something is wrong. Agents can be confident that the transaction will be correctly registered, but a distributed ledger cannot verify whether its content is legitimate and, if something is wrong with the transaction itself, there is no possibility to reverse it: smart contracts are unstoppable. Moreover, humans are not out of the system entirely, which means that errors and misunderstandings can occur. Finally, the cryptographic techniques are still vulnerable to attacks: these may be difficult to perform, but users that engage with a blockchain need to trust that they will not happen.
The author argues that the success of the blockchain as an architecture of trust will depend on its governance. The blockchain is a way to enforce rules, but it is also a product of rules designed by humans, and it therefore needs governance to continue to operate and to decide the next rules of the game. Moreover, the law should regulate the blockchain framework: without legal rules, the blockchain could be used as an instrument by criminals and terrorists (for laundering money, for example), and this would reduce the trust that ordinary people put in the system. Crypto enthusiasts argue that the role of law will be replaced by smart contracts, but code cannot fully capture human intentions, which are an important part of private contracts, and this could create misunderstandings between the parties. The law can intervene where smart contracts cannot. Finally, regulation can also play an important role in developing the future of the blockchain and fostering its trustworthiness, as it does with other financial instruments and institutions.
<span id="trust-and-blockchain-in-practice"></span>
== Trust and blockchain in practice ==
Some scholars have started to think about how trust between users can be enhanced in real blockchain applications. You et al. (2022) identify the main challenge as the lack of consensus about how to measure trust in the blockchain environment. Therefore, they develop a framework to do so, creating a system based on subjective ratings of trustworthiness. The authors start by identifying six different blockchain applications, considering which factors can be used to measure trust in each specific domain. Identifying the key factors behind trustworthiness is essential for creating a system to enhance trust. In particular:
* Supply chain: it is possible to measure how trustworthy the supplier is by the average order arrival time and the defect rate, and how trustworthy the buyer is by the number of days for payment.
* Healthcare industry: to assess the trustworthiness of these firms, regulatory compliance proof, claim approval rate and drug prescription regularity can be the starting point.
* E-commerce: to assess the trustworthiness of those firms, the accuracy of ratings provided by the users and the security of payments represent the most important features.
* IoT devices: system security data and the reliability of the data provided by these devices are the most important features.
* Finance: pivotal factors are the security of transactions and data and the efficiency and quality of communications.
* Social media: news and reputation credit represent the most important characteristics to assess trustworthiness.
The problem of the blockchain is that, although the information recorded cannot be modified easily, the data may not always be true: the need for accountability arises because of this fact.
The system presented by the authors is based on trust scores given by agents that interact with other agents on the blockchain applications. Initially, there is no score, since no transaction has occurred yet. Then, the two parties start to interact and begin to collect trust factors about each other; the specific factors, described above, depend on which application is under consideration. Then, each actor gives his score, which is recorded on the blockchain and made available to other users, who are now better informed about the other users of the application and can decide whether to interact with them. The validity of the scores is ensured by the fact that each user will have followed the KYC validation procedures before interacting on the application, and it will be possible to identify each participant from the outside through verifiable credentials. Therefore, no rating will be anonymous.
This system may increase the trust between users because they are incentivized to adhere to the common organizational norms of each sector: otherwise, they would damage their reputation by having a low score permanently recorded on the blockchain. Therefore, this model may create a set of incentives to align the two sides of each transaction.
<span id="Games"></span>
= Trust games in the blockchain =
As explained before, the blockchain system employs game theory and incentives to make the agents act honestly on the network. After the work of Satoshi Nakamoto, several papers have studied the incentive structures and the games behind the blockchain and its consensus mechanism.
<span id="section"></span>
== Breiki (2022) ==
Breiki (2022) studies how trust among players evolves over time when they play trust evolution games. To do so, he defines the features of his abstract game. Firstly, the author identifies the parameters of the model: there are various miners, each of whom can cooperate (act honestly) or defect (cheat); a vector of probabilities defines the likelihood of each player succeeding in solving the puzzle, which is proportional to his computational power; there are the costs and rewards of mining, which take into account the propagation delay (i.e. the time needed to validate a transaction); and there is the market value. Moreover, the author uses two learning algorithms: fictitious play, where prior beliefs are defined, and satisficing learning, where an aspiration level of payoff and learning rates are defined. All in all, the author finds that the players learn to cooperate in the game to get a better payoff and that, for satisficing players, lower learning rates increase the final payoffs.
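A minimal fictitious-play sketch in the spirit of this setup (the 2x2 payoff numbers are illustrative assumptions chosen so that mutual cooperation pays best, not the paper's parameters):

```python
# Illustrative 2x2 payoffs (assumed, not Breiki's): actions 0=cooperate, 1=defect.
# Mutual cooperation pays best, echoing the incentive to mine honestly.
PAYOFF = {(0, 0): 3.0, (0, 1): 0.5, (1, 0): 2.0, (1, 1): 1.0}

def fictitious_play(rounds: int = 200, prior: float = 0.5) -> list[tuple[int, int]]:
    """Each player best-responds to the empirical frequency of the
    opponent's past actions, seeded with symmetric prior beliefs."""
    # counts[i][a] = (prior-weighted) number of times player i played action a
    counts = [[prior, prior], [prior, prior]]
    history = []
    for _ in range(rounds):
        actions = []
        for me in (0, 1):
            opp = 1 - me
            p_coop = counts[opp][0] / (counts[opp][0] + counts[opp][1])
            ev_coop = p_coop * PAYOFF[(0, 0)] + (1 - p_coop) * PAYOFF[(0, 1)]
            ev_defect = p_coop * PAYOFF[(1, 0)] + (1 - p_coop) * PAYOFF[(1, 1)]
            actions.append(0 if ev_coop >= ev_defect else 1)
        for i, a in enumerate(actions):
            counts[i][a] += 1
        history.append(tuple(actions))
    return history

# Cooperation is the best response whenever the opponent is believed to
# cooperate with probability above 1/3, so play locks in on (cooperate, cooperate).
print(fictitious_play()[-1])  # (0, 0)
```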
<span id="section-1"></span>
== J. Zhang and Wu (2021) ==
J. Zhang and Wu (2021) study evolutionary game theory applied to the blockchain network to understand the strategies and incentives of the participants and their cooperative behavior. The authors explain that the blockchain is a perfect environment for evolutionary game theory because:
* There is information symmetry, since all individuals have and share the same information on the network and each participant has complete transaction data.
* All the participants are equal so no party has a dominant advantage when the game begins.
* Participants are prone to trust each other and engage in the game because of the cryptographic mechanisms, which make the environment credible and immutable.
* The process of adding new blocks can be seen as a form of repeated games.
Agents have bounded rationality, since they cannot get global information because the network is complex, and therefore they are not fully able to maximize their payoffs. Each participant can have two possible behaviors, cooperation or defection, and they update their strategy considering the maximum payoff. Indeed, during the generation of new blocks, each agent is able to learn from his actions and the actions of the winners.
The model developed comprises two groups of miners: group A, with an inclination for cooperation, and group B, inclined to cheating. Participating in each group has a cost, <math display="inline">C_a</math> and <math display="inline">C_b</math> respectively, and each game brings a revenue ''R'', which will be rewarded to the participants. Each group also has its own benefits, denoted <math display="inline">K_a</math> and <math display="inline">K_b</math> (for group A, transaction fees and mining rewards; for group B, illegal revenue). Finally, there are also punitive measures, denoted ''P''.
Each player, in both groups, can decide which strategy to adopt. Their payoffs are: <math display="block">\begin{aligned}
E_{h,a} &= y(K_a + \lambda R - C_a) + (1-y)(K_a - C_{a}) \\
E_{m,a} &= y(K_a - C_a - P) + (1-y)(K_a - P) \\
E_{h,b} &= x(K_b + (1-\lambda)R - C_{b}) + (1-x)(K_b - C_b - P) \\
E_{m,b} &= x(K_b - C_{b}) + (1-x)(K_b - P)
\end{aligned}</math> where ''y'' and ''x'' are the probabilities of winning the game, ''h'' denotes the honest strategy, ''m'' the dishonest strategy and <math display="inline">\lambda</math> is the portion of ''R'' won.
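The four expected payoffs translate directly into code. The parameter values in the example are illustrative assumptions, chosen so that the punishment makes honesty pay for both groups:

```python
def payoffs(x, y, K_a, K_b, C_a, C_b, R, lam, P):
    """Expected payoffs of the four strategies in J. Zhang & Wu's game.
    x, y: winning probabilities of groups A and B; lam: share of R won
    by group A; P: punishment. Direct transcription of the equations."""
    E_h_a = y * (K_a + lam * R - C_a) + (1 - y) * (K_a - C_a)
    E_m_a = y * (K_a - C_a - P) + (1 - y) * (K_a - P)
    E_h_b = x * (K_b + (1 - lam) * R - C_b) + (1 - x) * (K_b - C_b - P)
    E_m_b = x * (K_b - C_b) + (1 - x) * (K_b - P)
    return E_h_a, E_m_a, E_h_b, E_m_b

# Illustrative numbers (assumed, not from the paper): with a meaningful
# reward R and punishment P, the honest strategy yields more for both groups.
E_h_a, E_m_a, E_h_b, E_m_b = payoffs(x=0.5, y=0.5, K_a=5, K_b=5,
                                     C_a=1, C_b=1, R=4, lam=0.5, P=2)
assert E_h_a > E_m_a and E_h_b > E_m_b
```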
The authors assume that, at the beginning, each participant has a decided strategy to start with. The goal is to examine the change in the populations of honest and dishonest agents after several rounds, taking into account changes in the parameters of the game. The authors argue that, unlike in classic evolutionary games, the relationships among agents in the blockchain are random. Moreover, the size of the network matters: in small networks, the emergence of cooperative behavior is easier. Finally, a definition of the evolutionarily stable strategy (ESS) is given: the ESS is ''“a strategy that other strategies cannot invade”'' (J. Zhang & Wu, 2021, p. 5).
The authors run various simulations. Firstly, 67% of group A are honest agents and 20% of group B are betrayers. Here, the honest strategy is an ESS, since the number of betrayers tends to 0 as the rounds increase. However, as the expected payoff of group B rises, the honest strategy remains an ESS but a weak one, because higher payoffs from the dishonest strategy tend to tempt agents to cheat. Therefore, future revenue expectations influence the behavior of participants in the blockchain. Then, the influence of the network structure is analyzed within the group A population, using a Watts-Strogatz (WS) small-world model and a Barabasi-Albert (BA) scale-free network model. The authors find that it takes quite a lot of rounds for honest agents to establish a trusting cooperative relationship, and this may represent an opportunity for cheating agents in the blockchain. Therefore, security is a relevant topic especially in the initial stage of a blockchain.
<span id="section-2"></span>
== L. Zhang and Tian (2023) ==
L. Zhang and Tian (2023) develop a Byzantine consensus protocol (i.e. a consensus protocol in which there are faulty and malicious agents) on the blockchain, shaping it as a dynamic game. Their main contribution to the literature lies in the fact that the agents in their model have bounded rationality and can learn from historical observations. In particular, this means that the participants choose among a limited set of strategies (honest or dishonest), they learn from historical observations and choose their strategy accordingly, taking into account the current state of information, but they are not able to forecast the future. Moreover, they are allowed to have inconsistent subjective beliefs about the probability of meeting agents with their same strategy: each agent believes that, for a portion ''m'' of rounds, he will meet a proposer with the same strategy, ranging from ''m''=1 (meaning that he and the proposer will always have the same strategy in every round) to ''m''=0 (meaning that he and the proposer will have the same strategy only by chance). Their model, based on a BFT (Byzantine Fault Tolerance) consensus protocol, consists of the following features:
* The agents, before the game, are selected to form different parallel committees, which compete in an ''n''-round mining game and which will not change until the end of the game.
* Each agent has one vote in the mining game.
* In each round, an agent is randomly selected to make a proposal about the validity of a block. The other agents become validators and vote on whether the block in the proposal is valid. The block is validated if the number of votes is higher than ''v'', a majority threshold.
* There is a reward ''R'' for validating the block, a cost <math display="inline">c_{check}</math> for verifying a transaction, a cost <math display="inline">c_{sent}</math> for voting for a transaction and a penalty ''k'' that validators incur if they misbehave.
Before each round, each validator checks whether their congeners (the other nodes in the network) have pivotality, i.e. the ability to control the consensus outcome because they hold the majority. As specified before, the participants have two strategies: the honest strategy, where miners follow the consensus protocol, and the Byzantine (dishonest) strategy, where miners damage the consensus protocol. The authors assume that the participants are fixed and that no one quits. Initially, a portion <math display="inline">x_1</math> of miners choose the honest strategy in the first turn.
The authors specify the concept of a stable equilibrium, defined as the situation in which <math display="inline">x_t=x_{t-1}</math>, i.e. when the portion of miners playing the honest strategy remains stable (the number of agents that switch from honest to dishonest equals the number that switch from dishonest to honest). There are three possible stable equilibria:
* The honest stable equilibrium, where no agent is cheating, so <math display="inline">x_t=x_{t-1}=1</math>.
* The Byzantine stable equilibrium, where all agents are cheating, so <math display="inline">x_t=x_{t-1}=0</math>.
* The pooling stable equilibrium, where both strategies exist, so <math display="inline">x_t=x_{t-1} \in (0,1)</math>.
Each equilibrium can be reached depending on the initial number of cheating/honest miners, their belief ''m'', the cost-reward mechanism and the pivotality rate (i.e. the minimum percentage of nodes that must agree to reach consensus and add a block). The authors find that only the honest stable equilibrium can support the safety, the liveness (the fact that all non-faulty agents should produce output) and the validity (the fact that all participants have the same valid output) of the blockchain. Moreover, they find that if the reward-punishment ratio increases, the blockchain becomes safer and the honest stable equilibrium is easier to achieve, while if the cost-punishment ratio increases, the safety and the liveness of the ledger are threatened and the honest equilibrium is more difficult to achieve. Finally, if the pivotality rate increases, every stable equilibrium is harder to achieve.
<span id="algorithms"></span>
= Trust in algorithms =
Algorithms are becoming more and more important in everyday life, from health care to criminal justice systems, contributing to decision making processes in many fields. Therefore, the natural question of whether it is possible to trust algorithms arises.
According to Spiegelhalter (2020), the trustworthiness of an algorithm comes from the claims made about the system, including how its developers describe how the system works and what it can do, as well as the claims made by the system itself, which refer to the algorithm’s responses and output regarding specific cases. Therefore, he proposes a model to assess and boost trustworthiness in algorithms.
Regarding the first kind of claims, developers should clearly state what the benefits and drawbacks of using their algorithms are. To assess this, the author proposes an evaluation structure of four phases:
* Digital testing: the algorithm's accuracy should be tested on digital datasets.
* Laboratory testing: the algorithm’s results should be compared with those of human experts in the field. An independent committee should evaluate which response is better.
* Field testing: the system should be tested in the field, to decide whether it does more harm or good, considering also the effects that it can have on the overall population.
* Routine use: if the algorithm passes the three previous phases, it should be monitored continuously, in order to solve problems that may eventually arise.
Having explicit positive evidence in all these phases would boost the trustworthiness of the claims made about the system by developers.
Considering the second type of claims, to reach a higher degree of trustworthiness the algorithm must specify the chain of reasoning behind its claims, the most important factors that led to its output, and the uncertainty around the claim. Moreover, a counterfactual analysis should be performed (i.e. what the output would be if the input changed). Overall, algorithms should be made clearer and more explainable, and transparency can play an important role in that. To increase trustworthiness, an algorithm should be accessible and intelligible to people; it should be usable, i.e. have an effective utility; and it should be assessable, i.e. the process behind every claim should be available. Ultimately, it should show how it works. More importantly, it should also clearly state its own limitations, so that trustworthiness does not become blind trust.
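The counterfactual question (what would the output be if one input changed?) can be sketched with a generic probe around a toy scorer; the model, its weights and the feature names are purely hypothetical:

```python
def explain_counterfactual(model, inputs: dict, feature: str, new_value):
    """Answer the counterfactual question for any callable model:
    how would the output change if one input changed?"""
    baseline = model(inputs)
    changed = model({**inputs, feature: new_value})
    return {"baseline": baseline, "counterfactual": changed,
            "delta": changed - baseline}

# A toy credit-style scorer; the weights and feature names are hypothetical.
def scorer(applicant: dict) -> float:
    return 0.6 * applicant["income"] - 0.4 * applicant["debt"]

report = explain_counterfactual(scorer, {"income": 50.0, "debt": 20.0},
                                "debt", 10.0)
print(report["delta"])  # halving the debt raises the score by about 4
```

A probe like this treats the model as a black box, which is precisely why it helps: the explanation does not depend on being able to read the model's internals.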
<span id="conclusion"></span>
= Conclusion =
Trust plays a pivotal role in ensuring the existence and development of modern societies. This paper provides a comprehensive summary of the current literature on how trust relates to the world of economics and finance. To begin with, the concept of trust is clearly defined and differentiated from related notions such as cooperation and confidence. The notion of risk is also discussed, as a complete assessment of the other party’s actions is antithetical to a trust relationship. Furthermore, various methods for measuring trust are explored, with trust games being the tool most commonly used by researchers. The paper goes on to explain how trust is a source of comparative advantage, which determines trade patterns. Additionally, trust is linked to stock market participation, with more trusting individuals being more likely to invest in risky assets and, conditional on participating, allocating a larger portion of their wealth to them. While complete trust among individuals would potentially be beneficial, it is not possible in the real world; however, money can act as a substitute that replicates the allocations of a trustworthy economy. The paper also emphasizes the importance of trusting institutions, especially in a world where trust is lacking: effective communication among individuals is critical in assessing the true trustworthiness of an institution. The paper then delves into the relationship between trust and blockchain technology, which can be seen as a new architecture of trust; moreover, various authors have developed blockchain trust games to better understand the best consensus mechanism process. Finally, the importance of trusting algorithms is highlighted, given their widespread use in everyday technology, healthcare, and the justice system.
= References =
* Alós-Ferrer, C., & Farolfi, F. (2019). Trust games and beyond [Accessed on April 19, 2023]. ''Frontiers in Neuroscience'', 13. [https://doi.org/10.3389/fnins.2019.00887]
* Bajos, N., Spire, A., Silberzan, L., Sireyjol, A., Jusot, F., Meyer, L., Franck, J.-E., Warszawski, J., Bagein, G., Counil, E., Jusot, F., Lydie, N., Martin, C., Meyer, L., Raynaud, P., Rouquette, A., ... Spire, A. (2022). When lack of trust in the government and in scientists reinforces social inequalities in vaccination against covid-19 [Accessed on May 7, 2023]. ''Frontiers in Public Health'', 10. [https://doi.org/10.3389/fpubh.2022.908152]
* Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. ''Games and Economic Behavior'', 10(1), 122–142.
* Bohnet, I., & Zeckhauser, R. (2004). Trust, risk and betrayal [Trust and Trustworthiness]. ''Journal of Economic Behavior & Organization'', 55(4), 467–484.
* Breiki, H. (2022). Trust evolution game in blockchain [Accessed on April 20, 2023]. ''2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA)'', 1–4. [https://doi.org/10.1109/AICCSA56895.2022.10017651]
* Burnham, T., McCabe, K., & Smith, V. L. (2000). Friend-or-foe intentionality priming in an extensive form trust game. ''Journal of Economic Behavior & Organization'', 43(1), 57–73.
* Cingano, F., & Pinotti, P. (2016). Trust, firm organization, and the pattern of comparative advantage. ''Journal of International Economics'', 100, 1–13.
* Edelman. (2022). Edelman trust barometer 2022 [Accessed on April 22, 2023]. [https://www.edelman.com/sites/g/files/aatuss191/files/2022-01/2022%20Edelman%20Trust%20Barometer%20FINAL_Jan25.pdf]
* Fehr, E., Fischbacher, U., Rosenbladt, B. v., Schupp, J., & Wagner, G. G. (2003). A nation-wide laboratory examining trust and trustworthiness by integrating behavioral experiments into representative surveys [Accessed on May 2, 2023]. ''Working paper / Institute for Empirical Research in Economics'', 141. [https://doi.org/10.3929/ethz-a-004465776]
* Gale, D. (1978). The core of a monetary economy without trust. ''Journal of Economic Theory'', 19(2), 456–491.
* Gambetta, D. (2000). Can we trust trust? ''Trust: Making and Breaking Cooperative Relations'', electronic edition, Department of Sociology, University of Oxford, 213–237.
* Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. ''The Quarterly Journal of Economics'', 115(3), 811–846.
* Grimes, A. (1990). Bargaining, trust and the role of money. ''The Scandinavian Journal of Economics'', 92(4), 605–612.
* Guiso, L., Sapienza, P., & Zingales, L. (2008). Trusting the stock market. ''Journal of Finance'', 63, 2557–2600.
* Houser, D., Schunk, D., & Winter, J. (2010). Distinguishing trust from risk: An anatomy of the investment game. ''Journal of Economic Behavior & Organization'', 74(1), 72–81.
* Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases trust in humans. ''Nature'', 435(7042), 673–676.
* Lenton, P., & Mosley, P. (2011). Incentivising trust. ''Journal of Economic Psychology'', 32(5), 890–897.
* Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. ''The Academy of Management Review'', 20(3), 709–734.
* Meylahn, B. V., den Boer, A. V., & Mandjes, M. (2023). Trusting: Alone and together [Accessed on May 9, 2023]. [https://arxiv.org/abs/2303.01921]
* Nakamoto, S. (2009). Bitcoin: A peer-to-peer electronic cash system [Accessed on April 19, 2023]. Cryptography Mailing list at [https://metzdowd.com].
* Spiegelhalter, D. (2020). Should we trust algorithms? [Accessed on April 30, 2023]. ''Harvard Data Science Review'', 2(1). [https://hdsr.mitpress.mit.edu/pub/56lnenzj]
* Warren, M. (2018). Trust and democracy. In ''The Oxford Handbook of Social and Political Trust''. Oxford: Oxford University Press.
* Werbach, K. (2018). ''The Blockchain and the New Architecture of Trust''. Cambridge: The MIT Press.
* You, S., Radivojevic, K., Nabrzyski, J., & Brenner, P. (2022). Trust in the context of blockchain applications. ''2022 Fourth International Conference on Blockchain Computing and Applications (BCCA)'', 111–118.
* Zak, P. J., Kurzban, R., & Matzner, W. T. (2005). Oxytocin is associated with human trustworthiness. ''Hormones and Behavior'', 48(5), 522–527.
* Zhang, J., & Wu, M. (2021). Cooperation mechanism in blockchain by evolutionary game theory [Accessed on April 19, 2023]. ''Complexity'', vol. 2021. [https://doi.org/10.1155/2021/1258730]
* Zhang, L., & Tian, X. (2023). On blockchain we cooperate: An evolutionary game perspective. | * Zhang, L., & Tian, X. (2023). On blockchain we cooperate: An evolutionary game perspective [Accessed on April 28, 2023]. [https://arxiv.org/abs/2212.05357] |
Latest revision as of 15:31, 9 June 2023
Contribution of SIMONE GOZZINI
Introduction
Trust is a fundamental sentiment, the binding force behind modern societies: without it, no progress would have been possible. Trust is what permitted the birth of modern finance with the Buttonwood Agreement of 1792, it is what allows people to rely on another person or organization without continuously needing to assess what the other party is doing, and it is ultimately what permits the existence of modern democracies (Warren, 2018). Without it, no meaningful relationship would be possible. In recent years, however, a general sentiment of distrust has been spreading through civil society. Edelman, a global communications firm, conducts a comprehensive survey about trust every year. For 2022, the results picture a general sentiment of distrust across all segments of the population: 60% of respondents say that their default tendency is to distrust others, around 50% of people see the media as a divisive and untrustworthy institution, and trust in government keeps dropping year after year. Governments in particular are seen as unable to fix society's problems (Edelman, 2022). Distrust affects modern societies as a whole, impacting not only social relationships and the economy but also human health: for example, lower trust in government has led to lower vaccination rates against COVID-19, threatening society as a whole (Bajos et al., 2022).
This paper highlights the importance of trust in modern economies and in the financial world. Section 2 describes the concept of trust, differentiating it from other human sentiments and attitudes like cooperation and confidence. Risk is involved throughout, given that trust entails a form of faith in someone or something. Section 3 describes the various methodologies used in the literature to measure trust: trust games, surveys and the frontier of neuroscience. Section 4 presents trust as a source of comparative advantage in world trade patterns: societies with more trust have bigger and more productive firms. Section 5 studies how trust affects stock market participation: people with a higher tendency to trust are more likely to participate in the stock market and, conditional on participating, they invest a higher fraction of their wealth. Section 6 describes a general equilibrium model where money is seen as a substitute for trust: the allocation of resources in a trustworthy society can also be reached in a trustless society that employs money. Section 7 describes a stylized model of trust between individuals and an institution: the exchange of information among individuals is found to be a tool that improves the assessment of the true trustworthiness of an institution. Section 8 presents blockchain technology as a new architecture of trust, describing also how trust can be enhanced to reach a wider diffusion and application of this technology. Section 9 presents various papers on trust games in blockchain settings, considering in particular how to reach and improve the consensus process. Section 10 discusses how algorithms, which are becoming ever more important in modern life, can be trusted: in particular, the author highlights transparency and accessibility as fundamental characteristics for enhancing trust. Section 11 concludes.
The concept of trust
According to the definition of Gambetta (2000), trust is “a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action” (Gambetta, 2000, p. 5).
This definition highlights important concepts:
- Trust is a probability p, subjective and compared against a threshold: people engage in a trust relationship if they believe that the probability that the other person will perform the particular action mentioned in the definition is higher than a certain level, which depends on the individual's predisposition to trust and on the circumstances under which the relationship is being created, like the cost of misplacing trust.
- Trust is related to uncertainty: the underlying assumption is that the agent is not able to fully monitor the other agent while he performs the particular action (as is usually the case in practice); otherwise trust would not be necessary, since the former could simply keep track of the latter while the action is performed.
- It has an impact on the trustor, otherwise the actions of the second agent would not matter to the first, and engaging in a relationship would not be necessary.
Trust is relevant when the other agents (trustees) are free to betray the trustor; if coercion intervenes, the outcome of the relationship is known ex-ante. Assessing a probability would no longer be needed and uncertainty would not be involved: the resulting interaction would not be a trust relationship, given that it lacks its fundamental characteristics. Furthermore, the trustor too needs to be free to choose whether to engage in the relationship or to walk away from it; otherwise assessing p would again not be needed, since he would have no choice.
Therefore, trust is fundamentally a free choice between two individuals who seek mutual benefits, and it involves a degree of risk for the trustor, given that it affects his personal sphere of action.
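Gambetta's view of trust as a subjective probability compared against a situation-dependent threshold lends itself to a small illustration. The decision rule and all numbers below are hypothetical, chosen only to show how the same subjective probability p can clear the threshold in a low-stakes situation but not in a high-stakes one:

```python
# Illustrative sketch of trust as a subjective probability threshold
# (in the spirit of Gambetta, 2000). The threshold rule is an assumption
# made for this example, not a formula from the paper.

def decides_to_trust(p, cost_of_betrayal, gain_if_honoured):
    """Trust when the expected gain outweighs the expected loss.

    p: subjective probability that the trustee performs the action.
    The threshold rises with the cost of misplaced trust, capturing the
    idea that higher stakes demand a higher subjective probability.
    """
    threshold = cost_of_betrayal / (cost_of_betrayal + gain_if_honoured)
    return p > threshold

# The same p = 0.6 justifies trusting at low stakes but not at high stakes.
print(decides_to_trust(0.6, cost_of_betrayal=10, gain_if_honoured=20))  # True
print(decides_to_trust(0.6, cost_of_betrayal=50, gain_if_honoured=20))  # False
```

The decision thus depends not on p alone but on the interplay between p and the circumstances, exactly as the definition above requires.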
Cooperation and trust
Trust, however, must not be confused with other human actions, sentiments or beliefs. It is different from friendship, from passion, from loyalty and from cooperation. The relationship between the last of these and trust, in particular, is investigated extensively by Gambetta.
Trust can be seen as:
- a precondition for cooperation which, together with healthy competition, fosters human progress. However, although trust is probably the most efficient way to achieve cooperation, it is not a necessary condition: throughout history people have used surrogates to overcome its absence, like coercion, contracts and promises. All of these reduce the possible alternatives that the trustor and the trustee face, thus reducing the risk for both parties of engaging in the relationship. A higher level of trust increases the probability of cooperating, but cooperation can result even when p is low, because an agent also takes into consideration the cost and the benefit of engaging (or not) in such a relationship, the other alternatives he has and the specific situation.
- an outcome of cooperation. Societies may engage in cooperation thanks to “a set of fortunate practices” (Gambetta, 2000, p. 10), particular circumstances and the need to satisfy mutual interests, for which the cost of not engaging in cooperation is higher than the risk of engaging. Trust is therefore the outcome of these practices and there is no need for prior beliefs about the trustworthiness of the other party, since trust will arise only after the beginning of the relationship, when information is collected. This statement is reinforced by the fact that cooperation exists in animals, that are unlikely to experience trust.
However, according to the author, there is no reason to claim that cooperation is a spontaneous equilibrium in human interaction: cooperation is just as likely as non-cooperation. A predisposition to trust may be rational for humans as a means to achieve their objectives, since trust is fundamentally an efficient way to achieve cooperation, but it is not necessary to wait for trust to evolve in order to initiate cooperation. Common interests and constraints can be enough, and they can be beneficial especially in underdeveloped countries with low levels of trust. In fact, although trust and trustworthiness can be advantageous for an individual's purposes, they cannot be artificially induced in a rational person. Moreover, the author argues that rational trustors and trustees may seek and present evidence in order to trust and to appear trustworthy. However, more information cannot fully solve the problem of trust: once people trust, they do not try to find evidence to corroborate their belief, but rather change their mind only if they encounter contrary evidence, which is not easy to come by.
The concept of trust in organizations
Mayer et al. (1995) start from the studies of Gambetta to further develop the understanding of trust in the context of organizations. In particular, they focus on the trust relationship between two individuals: a trustor, who decides whether to trust an individual to perform a particular action, and a trustee, who receives the trustor's trust and then decides whether to fulfil that action. The flow of trust is unidirectional: neither mutual trust between two parties nor trust in a social system is developed in the paper. According to the authors, the concept of vulnerability is what is missing from Gambetta's definition, given that "Trust is not taking risk per se, but rather it is a willingness to take risk." (Mayer et al., 1995, p. 712). Trust is then differentiated from related constructs such as:
- Cooperation, which is also studied extensively by Gambetta. The authors highlight that trust is not a conditio sine qua non for cooperation, since it is possible to cooperate with someone untrustworthy (for example when there are external controls and constraints like those discussed in the previous section).
- Confidence: the main difference lies in the fact that trust requires risk to be assumed, while confidence does not. Moreover, when a person chooses to trust, he considers a set of possible alternatives, which is not the case with confidence.
- Predictability: trust and predictability are both ways to cope with uncertainty but, if a person is predictable, it does not necessarily mean that he is worth trusting. It is possible to predict that the other person will consistently behave in negative ways (so uncertainty is reduced), yet no rational individual would put trust in him.
The authors then analyze the characteristics of the trustor and the trustee that can together initiate a trust relationship between the two agents. The most important feature of the trustor is his propensity to trust another person, a personal trait that is constant over time and across situations. It is a general willingness to trust, not tied to any particular party, since it is measurable before any interaction with the other agent. However, each trustor develops different levels of trust for different trustees after a relationship is initiated; these levels therefore depend on the characteristics and actions of the trustee, i.e. his trustworthiness. According to the authors, three main characteristics of the trustee explain trustworthiness:
- Ability: the set of skills and competencies of the trustee over a specific domain. It is possible to trust another person to perform a particular action only if that person is competent in the relevant field; otherwise he should not be trusted, even though he may be committed to completing the task. Trust should therefore not be understood in absolute terms, but relative to a specific field of knowledge.
- Benevolence: a personal trait of the trustee towards the trustor, reflecting how much the former wants what is good for the latter. More benevolence leads to higher trust, because the trustor can be more confident that the trustee will perform the action taking the trustor's benefit into account, and not only his own egoistic motives.
- Integrity: it is defined as “the trustor’s perception that the trustee adheres to a set of principles that the trustor finds acceptable” (Mayer et al., 1995, p. 719). A trustee’s integrity therefore depends on what the trustor’s set of beliefs are. If the trustor thinks that the integrity of the trustee is not sufficient, he will not engage in a trust relationship with him.
In particular, integrity is central in the early stages of the relationship, before any insights are gained; benevolence becomes important over time, as the trustor gathers information during the course of the relationship; ability remains important from beginning to end. After engaging in the trust relationship, the trustor gains new data and information, through which he can update his beliefs about these three characteristics of the trustee, eventually deciding whether the placement of trust is still reasonable. While the trustor tries to assess these characteristics, the role of context becomes important, because it affects ability (a change in the situation may change the skills needed to complete a certain task), benevolence (the trustee may change his behavior over time) and integrity (a certain action of the trustee may seem incoherent with the trustor's set of values only because the specific situation forced it).
Finally, the authors deal with the risk involved in trusting. In particular, they highlight the fact that there is no risk taking in the propensity to trust, but risk arises only when an agent effectively engages in a trust relationship. However, the form and the level of risk assumed by the trustor will depend on the level of trust involved in a relationship: the more the trustor trusts the trustee, the more risk he will be willing to take. So, before initiating a trust relationship, the agent has to assess whether the level of trust is higher or lower than the perceived level of risk, so that he can decide whether it makes sense to engage in such a relationship.
How to measure trust
Trust is therefore a fundamental device in human society, and it is also important in economics and finance, as this paper will later explain. A natural question arises: how is it possible to measure trust? The question is not easy to answer, since trust is a human sentiment, subjective and emotional, interwoven with other sentiments and beliefs. Alós-Ferrer and Farolfi (2019) review the major methods used in the literature to measure trust, highlighting the main limitations of each.
Trust Games and Game Theory
Experimental economics has relied intensively on game theory to quantify trust. The games most used nowadays are variants of the Trust Game, invented by Berg et al. (1995). Alós-Ferrer and Farolfi (2019) describe it as follows: “A first agent, called the trustor, is given a monetary endowment X, and can choose which fraction p of it (zero being an option) will be sent to the second agent, called the trustee. The transfer p · X is then gone, and there is nothing the trustor can do to ensure a return of any kind. Before the transfer arrives into the trustee’s hands, the transfer is magnified by a factor K > 1. The trustee is free to keep the whole amount without repercussion. Crucially, however, the trustee has the option to send a fraction q of the received transfer back to the trustor, hence honoring the trustor’s initial sacrifice” (Alós-Ferrer & Farolfi, 2019, p. 1). The trustor’s transfer can serve as a measure of trust, while the subsequent transfer of the trustee is a measure of trustworthiness. These games exhibit the important features of trust described by Gambetta (2000): the trustor’s and trustee’s decisions are free and voluntary, uncertainty and risk are involved, and there are possible repercussions for the trustor (a loss in utility).
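The payoff mechanics of the game just described can be sketched in a few lines; the parameter values in the examples are illustrative, not taken from any experiment:

```python
# One round of the Trust Game (Berg et al., 1995), following the
# description in Alós-Ferrer and Farolfi (2019). Values are illustrative.

def trust_game(X, p, K, q):
    """Return (trustor_payoff, trustee_payoff).

    X: trustor's endowment; p: fraction sent (a measure of trust);
    K: multiplier (K > 1); q: fraction returned (trustworthiness).
    """
    sent = p * X              # the trustor's transfer, gone for good
    received = K * sent       # magnified before reaching the trustee
    returned = q * received   # the trustee may honour the sacrifice
    trustor_payoff = X - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Full trust met with an even split of the tripled transfer:
print(trust_game(X=10, p=1.0, K=3, q=0.5))  # (15.0, 15.0)
# Full trust betrayed (the trustee keeps everything):
print(trust_game(X=10, p=1.0, K=3, q=0.0))  # (0.0, 30.0)
```

The two example calls make the trustor's risk concrete: trusting fully can double his endowment or wipe it out, depending entirely on the trustee's unmonitored choice of q.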
However, despite the popularity of this method, it has several limitations. Agents' behavior may contain motivational confounds that distort the measurement of trust and trustworthiness, like selfish or altruistic tendencies, efficiency concerns, or prior personal preferences (like inequity aversion). To address this problem, the authors suggest taking as a measure the difference between the transfers in the Trust Game and those in the Dictator Game (a game where the proposer's decisions are implemented without the responder being able to react).
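As a sketch of this adjustment, with hypothetical transfer amounts, the trust measure becomes the part of the Trust Game transfer not explained by unconditional giving:

```python
# Sketch of the confound adjustment suggested by Alós-Ferrer and
# Farolfi (2019): subtract the Dictator Game transfer (pure giving,
# no possible return) from the Trust Game transfer. Amounts are
# hypothetical.

def adjusted_trust(trust_game_transfer, dictator_transfer):
    """Transfer attributable to trust rather than unconditional giving."""
    return trust_game_transfer - dictator_transfer

# A subject who sends 6 in the Trust Game but gives 2 unconditionally:
print(adjusted_trust(6, 2))  # 4
```

The residual 4 is the component of the transfer plausibly motivated by the expectation of reciprocity, rather than by altruism alone.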
Next, the risk attitudes of the agents are addressed. Trust involves a certain risk (given that the trustor cannot monitor the response of the trustee), so attitudes towards risk may affect the monetary transfers. The evidence is mixed, with early studies (like Houser et al., 2010) finding no relationship between risk attitudes and trust and others finding a correlation. The lack of agreement might be due to the kind of risk involved in the trust games themselves, which is not pure financial risk but betrayal aversion, i.e. the fear of being betrayed by another human being. On this point, the authors mention the study of Bohnet and Zeckhauser (2004), who found a “betrayal aversion” in the decisions of the agents, distinct from standard risk aversion. Therefore, standard measures of risk may not be suitable for disentangling the trust and risk components of the agents' transfers. The authors, however, also criticize the use of game variants to address this issue, since new measures may capture other undesired effects.
Another problem arises when the parameters, implementation or description of the trust game change: the agents' responses may not be consistent across contexts, making different experiments hard to compare. For instance, according to Lenton and Mosley (2011), increasing the multiplier K will likely increase both the trustor's transfer and the fraction returned by the trustee. The way the game is framed also has an impact: Burnham et al. (2000) show that the responses of the agents depend on whether, in the instructions of the game, the other agent was called partner or opponent; in the former case, the trustor trusted the trustee more. Yet if the game is not framed at all, the participants may create their own frame, interpreting the play in different and unpredictable ways and leading to biased results.
Surveys
Another possible measure of trust relies on the use of surveys. The most important example is the General Social Survey (GSS) of the U.S. National Opinion Research Center. The question asked is: “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people?”. The possible answers are: “Most people can be trusted” or “Can’t be too careful” or “I don’t know”. This question is used also in other important surveys, like the EVS (European Values Survey), the WVS (World Values Survey), the BHPS (British household panel study) and the ANES (American National Election Studies).
This method is not immune to problems. For example, as in the Trust Game, each individual's interpretation might play a role in the response. Moreover, the relationship between these two methods should be taken into account: ideally, if both were valid and consistent, the responses should be highly correlated. However, the evidence is mixed. Glaeser et al. (2000) find no correlation between the two measures, while Fehr et al. (2003) find evidence to the contrary. An explanation could be that surveys test a general propensity to trust, while Trust Games measure behavior in a specific strategic situation. The concept of trust is therefore not uniquely determined, and different methodologies may capture different aspects of this complex human attitude.
Moreover, the authors suggest that, if surveys are used as a measure, one must take into account various controls (like culture, geography and age) to interpret and therefore compare the responses.
Neuroscience
The new frontier in the measurement of trust is neuroscience, which aims to provide more objective, biologically grounded methods.
Firstly, the relationship between oxytocin (OT) and trust is investigated, in particular by linking OT levels with behavior in the Trust Game. Zak et al. (2005) find that OT levels can predict trustees' trustworthiness but not trustors' transfers. However, when the change in OT levels is endogenous (i.e. natural, as in the paper just mentioned), such studies cannot establish causality. Hence, another set of studies, where the level of OT was exogenously determined, is examined. Kosfeld et al. (2005) find that the treatment group in their experiment (i.e. the people to whom OT was administered) presents larger trustors' transfers compared to the control group, but no significant differences in the trustees' transfers. Moreover, their results suggest that OT causally increases trust through a reduction of betrayal aversion, and that it does not increase risk-taking behavior or prosocial attitudes in general. The two methods of investigation thus lead to results inconsistent with each other, and no firm conclusion can be reached: the relationship of OT with trust and trustworthiness is not as simple as previously thought.
Finally, the authors introduce the latest studies about the use of brain imaging to understand where trust comes from and how it forms. This might be useful to develop more reliable measures of trust in the future.
Trust as a source of comparative advantage
Cingano and Pinotti (2016) study the effect of trust on firm organization and on comparative advantage. The authors argue that interpersonal trust means more delegation of decisions within a firm, resulting in a larger firm size and in the expansion of more productive units. If trust is established, it is possible to expand the firm outside familiar and friendly relationships, thus using the firm’s own productivity advantage over a larger amount of input, given that the firm is bigger and has more factors of production. The principal-agent problem (that comes with delegation and prevents a higher level of it) can be partially solved by this human device. In particular, higher delegation causes higher productivity through:
- higher exploitation of the informational advantage of the managers and of specific skills of some workers.
- the reduction of information costs.
- more resiliency and ability to cope with changes in profit and growth opportunities.
Studying a sample of Italian and European companies, the authors find that trust, together with human capital and intangible intensity, is associated with greater delegation, which, in turn, is associated with larger firm size. Their findings suggest that high-trust countries present a higher value added per worker and higher exports in industries where delegation is needed, making trust a source of comparative advantage in trade patterns. This effect results from a shift away from smaller firms towards bigger ones.
The authors test their hypotheses through empirical data obtained with surveys. They retrieve data from:
- The INVIND survey from the Bank of Italy, which provides information about inputs, outputs, internal organization and governance of a sample of more than 6500 firms. These data are used to test trust differences across Italian regions.
- The World Values Survey (WVS) and the European Social Survey (ESS) to measure interpersonal trust and delegation.
- The OECD Structural Analysis Database (STAN) and the OECD Business Demographic Statistics, which provide information about value added per worker, organization and the number of workers of European firms.
The analysis starts with a regression of the form

y_ic = β (T_c × D_i) + γ' X_ic + δ_i + δ_c + ε_ic

where y_ic is industry specialization (measured through value added per worker or exports), T_c is the average level of trust in country c, D_i is a measure of the need for delegation in industry i, and X_ic, δ_i and δ_c are controls, respectively, for other determinants of specialization and for industry and geographical factors.
Then, the authors estimate the industry-level need for delegation through a regression of the form

R_f = δ_i + λ log L_f + φ' Z_f + u_f

where R_f is the number of responsibility centers of firm f (a measure of delegation inside firms), δ_i is the industry component used as the measure of delegation intensity, log L_f is the log of the number of workers (which is kept fixed), and Z_f and u_f are firm-level controls and the error term.
In particular, the analysis shows that, for the Italian sample, higher trust leads to an increase in the production of delegation intensive industries. Starting with the log of value added per worker as the dependent variable, the authors add a series of controls. Introducing human capital, the calculations show that it remains the main source of the pattern of specialization but, despite being correlated with delegation (which in turn has an effect on trust), the latter variable remains statistically significant. Then, two other controls are introduced: financial development and judicial quality. However, they do not affect the coefficient of trust, thus making the estimation more robust and consistent. The results are similar when the dependent variable is export.
For the international sample, the analysis is more complicated because different countries present different institutional dimensions, like labor market regulations and property protections. The results, however, are very similar, making their thesis consistent also at the international level.
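The mechanics behind the specialization regression can be sketched in a heavily simplified form. The sketch below uses synthetic data, a single regressor (the trust–delegation interaction) and no fixed effects or controls, so it only illustrates how a positive coefficient on the interaction would emerge, not the paper's actual estimation:

```python
# Toy illustration of regressing industry specialization on the
# interaction of country-level trust and industry-level delegation
# intensity. All data are synthetic; the true slope is set to 2.0.
import random

random.seed(0)

trust = {"A": 0.3, "B": 0.6, "C": 0.8}       # hypothetical country trust levels
delegation = {"low": 0.2, "high": 0.9}        # hypothetical delegation intensity

x, y = [], []
for country_trust in trust.values():
    for industry_delegation in delegation.values():
        interaction = country_trust * industry_delegation
        x.append(interaction)
        # Specialization rises with trust x delegation, plus small noise.
        y.append(2.0 * interaction + random.gauss(0, 0.05))

# OLS slope for a single regressor: beta = cov(x, y) / var(x)
mx = sum(x) / len(x)
my = sum(y) / len(y)
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
       sum((xi - mx) ** 2 for xi in x)
print(beta)  # close to the true slope of 2.0
```

In the paper the same logic is applied with fixed effects and controls, so that a positive, significant β signals that high-trust countries specialize in delegation-intensive industries.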
Trust and the stock market
Guiso et al. (2008) study the effect of trust on stock market participation across individuals and across countries. Starting from Gambetta (2000), they define trust as “the subjective probability individuals attribute to the possibility of being cheated” (Guiso et al., 2008, p. 2557), which depends on the characteristics of the financial system and the individual priors and predisposition to trust.
Firstly, they develop a theoretical model that reproduces the effect of trust on portfolio decisions, starting with a two-asset setting (one safe asset and one stock). They assume that investors know the distribution of stock returns but worry, with subjective probability p, about bad events, like fraud perpetrated by their broker, which would lead to a zero return on the stock. They also initially assume zero participation costs. Given a level of wealth W, with r̃ denoting the return on the stock and r_f the risk-free rate, each agent chooses the share α of wealth to invest in the risky asset that maximizes his expected utility.
They also show that a risk-averse individual will invest in the stock market only if his subjective probability of being cheated is below a threshold, p < p̄, where p̄ = (r̄ − r_f)/r̄ and r̄ is the mean of the true distribution of the stock's returns. This relationship comes from the fact that an investor buys the risky asset only if its expected return exceeds the risk-free rate, i.e. (1 − p) r̄ + p · 0 > r_f.
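The participation condition can be illustrated numerically. The gross return values below are hypothetical; the threshold formula p̄ = (r̄ − r_f)/r̄ follows from requiring (1 − p) r̄ > r_f, as in the model above:

```python
# Sketch of the participation condition in the two-asset model of
# Guiso et al. (2008): with probability p the investor is cheated and
# the stock returns zero, so he participates only if p < (r_bar - rf) / r_bar.
# The numerical values are hypothetical.

def participation_threshold(r_bar, rf):
    """Maximum tolerable subjective probability of being cheated."""
    return (r_bar - rf) / r_bar

def participates(p, r_bar, rf):
    """True if the investor's distrust p is low enough to enter the market."""
    return p < participation_threshold(r_bar, rf)

# With a mean gross stock return of 1.08 and a gross risk-free return of 1.02:
p_bar = participation_threshold(1.08, 1.02)
print(round(p_bar, 4))                  # 0.0556
print(participates(0.03, 1.08, 1.02))   # True: trusting enough to invest
print(participates(0.10, 1.08, 1.02))   # False: too distrustful to invest
```

Note how thin the margin is: even a 10% subjective probability of being cheated is enough to keep this investor out of the market, which is the model's explanation for widespread non-participation.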
An important result of this model is that the decision to participate in the stock market depends on the subjective probability p of being cheated (since it reduces the expected return of the investment) and does not depend on the level of W. Since W is not significantly correlated with trust (as measured in the survey data used in their empirical analysis), this can explain why even the wealthy might not engage in stock trading. Moreover, α itself depends on the level of trust: more trust means more wealth invested in risky assets, and vice versa.
Then, participation costs are introduced in the theoretical model. To enter the market, the investor now has to pay a fixed cost f (thus reducing the allocable wealth to W − f). As f increases, a higher level of trust is necessary to invest in stocks (the threshold p̄ decreases). In particular, lower trust reduces the return on stock investment (making participation less attractive) because it reduces the share of wealth invested in stocks and thus the expected utility from participating.
Finally, the authors demonstrate that risk tolerance and trust are two different things by looking at the optimal number of stocks held: this number increases with trust and also increases with risk aversion (because of the benefits of diversification). Since risk tolerance reduces the optimal number of stocks while trust increases it, the latter cannot be a proxy for the former. As the empirical analysis demonstrates, this is consistent with the data. The result is reinforced by the finding that individuals with high levels of trust buy more insurance, while risk-tolerant individuals buy less.
The authors use survey data to test their model. In particular, they employ the DNB Household Survey (to which they directly contributed), which covers about 1,990 individuals and tries to capture their level of generalized trust, their risk and ambiguity aversion and their optimism. It also reports statistics about households' assets, distinguishing in particular between listed and unlisted stocks and between securities held directly or through financial intermediaries. To measure generalized trust, this survey uses the same question as the World Values Survey (see section 3.2); to measure risk aversion and ambiguity aversion, the authors ask respondents their willingness to pay for some lotteries; to measure optimism, respondents quantify their agreement (on a scale from 1 to 5) with the statement: “I expect more good things to happen to me than bad things”. Then, the Italian bank customers survey is used to capture personalized trust, i.e. the trust that an individual has towards his financial intermediary, which can differ from his general propensity to trust. This data set contains information about the interviewees' financial assets and their demographic characteristics. Most importantly, to measure personalized trust, the survey asks: “How much do you trust your bank official or broker as financial advisor for your investment decisions?”.
The empirical analysis confirms their hypothesis. Starting from the study of the relationship between generalized trust (i.e. the level of trust measured in the survey) and stock market participation, the authors find that trust has a positive and highly significant coefficient (so more trust means more participation), even after controlling for a number of variables (like age, sex and wealth). In particular, "Trusting others increases the probability of direct participation in the stock market by 6.5 percentage points” (Guiso et al., 2008, p. 2578). Risk aversion and ambiguity aversion do not appear significant, and neither does optimism, since the coefficient of trust remains unchanged when they are included. Moreover, when studying the effect of wealth, the authors find that the coefficient of trust remains significant even after controlling for this variable, thus supporting their previous claim: the lack of trust may explain why rich people do not invest in stocks even though they should not be affected by participation costs. Then, the relationship between trust and the amount invested in risky assets is studied. The result again confirms the hypothesis: "Individuals who trust have a 3.4 percentage points higher share in stocks, or about 15.5% of the sample mean" (Guiso et al., 2008, p. 2580). The same results hold for risky assets in general: risk and ambiguity aversion are not statistically significant in this case either. However, a significant control is the level of education. The authors find that trust increases the holding of risky securities for everyone, but less so for more educated people, since they know better how the market works than the less educated and are less affected by priors and cultural stereotypes.
Considering now the Italian bank customers survey, the results confirm the previous ones: trust in one’s own financial intermediary increases the probability of investing in stocks and the share of wealth allocated to this type of security.
Finally, the authors investigate the implications of the level of trust for market participation across countries. The analysis rests on the following reasoning: less trust means that agents are less willing to invest and, in turn, firms will be less willing to float their equity, given that doing so is less rewarding. Therefore, countries with lower levels of trust should show lower market participation. The empirical analysis confirms these claims: trust has a positive and significant effect on stock ownership among individuals and also a positive effect on stock market capitalization.
== Money as a substitute for trust ==
Gale (1978) develops a theoretical model to study the effect of introducing money into an economy characterized by a lack of trust between its agents. The author starts from the Arrow-Debreu model of Walrasian equilibrium. This model is characterized by a finite number of consumers (who have an initial endowment of resources) and commodities, perfect competition in all markets and constant returns to scale. Moreover, markets are complete, which means that all transactions in the economy can be arranged at one time. This is possible because transactions that involve the delivery of a commodity in a different time period (i.e. a commodity is sold in t=0 but delivered in t=1) can be concluded through contracts at time t=0: the contract specifies that the delivery will occur in t=1, even though the transaction itself is completed in t=0. The contract is therefore seen as the commodity being traded. This mechanism operates under the assumption that there is no uncertainty in the market. In such an environment, agents can trust each other to fulfill the contracts they have agreed upon and, as a result, there is no need to distinguish between contracts and their execution. Nevertheless, if for some reason agents stop trusting each other, and uncertainty therefore arises, some agents may prefer not to fulfill their promises and other agents, anticipating that, might not engage in the transaction in the first place. If trust were to vanish, the allocation process would break down unless some substitute were found. The author demonstrates that money can be such a substitute for trust: it can permit the allocation and redistribution of resources even in the absence of trust.
To illustrate this formally, the author employs the concept of the core, that is, “the set of attainable allocations such that (a) neither agent can make himself better off by remaining self-sufficient and (b) two agents cannot both be made better off by any feasible redistribution of their joint endowment.” (Gale, 1978, p. 459), and extends it to the concept of a sequential core, which integrates time periods and uncertainty about the outcome of a contract. An allocation of commodities is trustworthy if it belongs to the sequential core, that is, if it cannot be improved by any redistribution of resources in any time period. If this were not the case, some agent would have an incentive to break the contract in later periods. Therefore, without trust, no exchange of commodities would form a sequential core, because agents would have an incentive to deviate from equilibrium to increase their own utility.
To resolve this issue, the author introduces money in the model. In particular, each agent is given an endowment of money at time t=0 and it is assumed that at the end of t=1 (the second and last period) the same amount of money must be returned as a tax. Implicitly, the model introduces a social institution (for example a government) that issues fiat money, which has no intrinsic value but is guaranteed by the imposition of the government itself (as is the case in modern economies). In between the two periods, the agents can exchange money among themselves. This solves the issue: the agents who were reluctant to keep their promises in the model without money and without trust now have an incentive to fulfill the contract, given that they need the money to pay their taxes. Money does not restore trust among agents, but it acts as a substitute, a way to enforce previous contracts and agreements. The possibility for the government to directly intervene in the fulfillment of contracts should be discarded, since it is not plausible that a human institution could be so almighty as to oversee every transaction in a complex economy. Therefore, money can create the conditions for trustworthy transactions (without trust) in a decentralized way. However, the institution must be able to credibly impose the payment of taxes, otherwise agents would face the same problem as before. To do that, penalties for those who do not pay their taxes should be sufficiently gruesome, although the author does not quantify them. Moreover, the author argues that, although money can substitute for trust, there could be a loss in overall utility with respect to the case with trust: in the model, the social institution is introduced without any explicit cost, but this is unlikely to be the case in reality, since introducing a government that is able to enforce tax payments and issue securities is certainly not free.
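Gale's enforcement logic can be illustrated with a toy calculation (all utility and money values below are hypothetical, chosen only to show the mechanism, and are not taken from the paper):

```python
TAX = 1.0          # money that must be returned to the institution as a tax
PENALTY = 5.0      # utility loss for failing to pay the tax (assumed "gruesome")
U_KEEP_GOOD = 2.0  # extra utility from reneging and keeping the promised good
U_PAYMENT = 1.0    # money received on delivery, just enough to cover the tax

def payoff(fulfill: bool) -> float:
    """Payoff of an agent who promised in t=0 to deliver a good in t=1."""
    if fulfill:
        # deliver the good, receive the payment, settle the tax
        return U_PAYMENT - TAX       # 0.0: promise kept, tax paid
    # keep the good, but the tax cannot be paid and the penalty applies
    return U_KEEP_GOOD - PENALTY     # -3.0

print(max([True, False], key=payoff))  # True: fulfilling is the rational choice
```

With no tax obligation (TAX = 0 and PENALTY = 0), the same comparison flips: reneging would yield 2.0 against 1.0, which is exactly Gale's trust-breakdown case.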
== The gruesome penalty ==
Grimes (1990) continues the work of Gale (1978), analyzing the role of money in the same theoretical framework. The results of Gale are confirmed: without money, an economy without trust would end in autarky, since no transaction can effectively occur. With the introduction of money, however, it is possible to replicate the allocation of the economy with trust. The contribution of his work with respect to his predecessor’s research is a quantification of the gruesome penalty that agents face when they do not respect their tax obligations.
In particular, the author shows that the simple introduction of money does not necessarily replicate the outcomes of an economy with trust, because a sufficient incentive (i.e. a penalty above a certain threshold) making it inefficient for agents not to fulfill their promises must be introduced. Below this threshold, the increase in utility from reneging on the contract exceeds the reduction in utility due to the penalty; therefore, the optimal choice is not to fulfill the agreement. Above the threshold, on the contrary, the optimal choice is to fulfill the contracts (thus replicating the allocation with trust). It is worth noting that the intensity of the penalty has no effect on the final allocation of goods, since they are already Pareto-efficiently allocated, but the author shows that it does have an impact on prices.
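The threshold argument can be sketched numerically (the gain G and the grid of penalties are assumed for illustration and are not taken from Grimes's model):

```python
G = 2.5  # assumed extra utility from breaking the contract

def optimal_to_fulfill(penalty: float) -> bool:
    # utility of reneging is G - penalty; utility of fulfilling is normalized to 0
    return G - penalty <= 0

# scan a grid of penalties to find the smallest one that sustains the contract
threshold = next(p / 10 for p in range(0, 100) if optimal_to_fulfill(p / 10))
print(threshold)  # 2.5: below it agents renege, above it they fulfill
```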
To calculate the threshold, the author considers a world with two agents, two periods, no uncertainty and one good in each period, with each agent’s endowment in each period expressed in terms of a small positive number. The other features of this world are the same as those described in the previous paragraph, and his calculations yield an explicit expression for the penalty needed to replicate trust.
== Trusting an institution ==
Meylahn et al. (2023) study the dynamics of trust between individuals and institutions using a stylized model of social network learning. Firstly, the authors define a model describing the relationship between a single individual and the institution, in which the agent has repeated opportunities to place trust. The institution’s behavior is modeled by a parameter p that represents its trustworthiness, i.e. the probability that the institution honors the trust placed by the individual. So, in each round the institution honors the trust that has been placed by the agent with probability p and abuses it with probability 1−p. Similarly, the agent, in each round, can decide whether or not to place trust in the institution. The decisions taken by the two are independent in each round and the agent observes the actions of the institution only when he places trust. If trust is honored, he gains r, while if it is abused he loses s; therefore, his expected utility is pr − (1−p)s. The agent behaves with myopic rationality, so he maximizes the expected utility in each round without taking future rounds into consideration. Moreover, the agent starts the interaction with the institution with a prior belief P0, a function of two parameters that can be considered the number of times trust was honored and betrayed in a past setting, before the beginning of the experiment. The variables of interest are the number of rounds after which the agent decides not to place trust anymore, through which the probability of quitting is determined, and q, the expected time spent playing before quitting. In each round, the agent updates his knowledge by taking into consideration the actions taken by the institution and, therefore, he updates his estimate of p. If the agent quits, he will never trust the institution again, given that there is no possibility to update his estimate of the institution's trustworthiness.
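The single-agent dynamics can be sketched as a short simulation (the notation and all parameter values below are illustrative assumptions, not the authors': trustworthiness p, gain r, loss l, and a prior given by pseudo-counts of past honored/betrayed interactions):

```python
import random

def simulate(p=0.8, r=1.0, l=1.0, honored0=4, betrayed0=1, rounds=500, seed=0):
    """Rounds of trust with an institution of trustworthiness p (illustrative)."""
    rng = random.Random(seed)
    honored, betrayed = honored0, betrayed0     # prior pseudo-counts
    for t in range(rounds):
        p_hat = honored / (honored + betrayed)  # current estimate of p
        # myopic rationality: place trust only if expected utility is positive
        if p_hat * r - (1 - p_hat) * l <= 0:
            return t                            # the agent quits for good
        if rng.random() < p:
            honored += 1                        # the institution honors trust
        else:
            betrayed += 1                       # the institution abuses trust
    return None                                 # never quit within `rounds`

# a trustworthy institution tends to be trusted for long, an untrustworthy one
# is abandoned early (exact values depend on the random seed)
print(simulate(p=0.9), simulate(p=0.1))
```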
Then, the authors define another model where a second agent is added: the relationship between the two agents plays an important role in determining the relationship with the institution. The agents’ and the institution’s behavior share the same characteristics as in the single-agent model: the agents choose in each round whether to place trust or not, they have a prior belief, and the institution decides whether to honor or betray the agents’ trust. The authors further assume that both agents share the same prior. The key feature of this model is that each agent, in each round, receives information from the other, through which he can update his own information. Two cases are analyzed:
- Agents fully communicate to each other the interactions they have with the institution. Given that the agents have the same prior and the same information available, they will have the same estimate of the institution's trustworthiness.
- Agents do not communicate explicitly, but they only observe the actions of the other agent. Therefore, the information received from the other agent will be incorporated only a round later.
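The two information regimes can be sketched as follows (a minimal illustration with assumed notation: each agent keeps counts of honored and betrayed interactions, starting from a shared prior):

```python
def run(outcomes1, outcomes2, share="full"):
    """outcomes_i: list of 1 (trust honored) / 0 (trust abused) seen by agent i.
    Returns each agent's final (honored, betrayed) counts under the regime."""
    c1, c2 = [2, 1], [2, 1]    # shared prior pseudo-counts (assumed)
    for t, (o1, o2) in enumerate(zip(outcomes1, outcomes2)):
        c1[1 - o1] += 1        # own observation, incorporated immediately
        c2[1 - o2] += 1
        if share == "full":
            # explicit communication: partner's outcome arrives the same round
            c1[1 - o2] += 1
            c2[1 - o1] += 1
        elif t > 0:
            # observation only: the partner's information is incorporated
            # one round later
            c1[1 - outcomes2[t - 1]] += 1
            c2[1 - outcomes1[t - 1]] += 1
    return tuple(c1), tuple(c2)

print(run([1, 1, 0], [1, 0, 1], "full"))     # ((6, 3), (6, 3)): identical beliefs
print(run([1, 1, 0], [1, 0, 1], "observe"))  # ((5, 3), (6, 2)): beliefs lag
```

Under full communication the two agents always hold the same counts; under observation only, each agent is missing the partner's most recent outcome.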
They run their model 4000 times for the single-agent model and 2000 times for the two-agent model, for a maximum of 500 rounds. They find that the probability of quitting is, in most settings (i.e. in various calibrations of the parameters), higher in the single-agent model. When considering only the two-agent model, the probability is higher when the agents can only observe each other's actions and are not able to fully communicate. However, there are exceptions, and in some simulations the observable-actions setting outperforms the full-communication model, thus having a lower probability of quitting. The expected time to quit is lower in the two-agent model than in the single-agent case, in particular when the agents fully communicate (and therefore receive more information). This is because having more information makes their estimates more precise: they either quit quickly or do not quit at all, since they need less time to obtain a good estimate of the institution's trustworthiness; if the estimate is not high enough, they quit after fewer rounds, otherwise they are likely to place trust indefinitely.
Overall, the authors find that communication is always helpful, since it increases the probability of continuing to trust a reliable institution and decreases the expected time before quitting an untrustworthy one. Moreover, they find that more optimistic priors increase the probability of trusting a trustworthy institution. Finally, they highlight that it is not possible to say which of the two-agent models is better, since this depends on the parameter settings and on which criterion is taken into consideration.
= Trust and the blockchain =
As highlighted before, trust, with its dynamics, is fundamental in every aspect of a society and is what permits societies themselves to evolve and transform. Without trust, each individual would bear the burden of verifying the reliability of every other agent he encounters, which would be impossible. Trust is also what permitted the birth of modern finance, with the Buttonwood Agreement of 1792 that led to the creation of the stock market. In recent years, however, trust within modern societies has been decreasing, putting at risk the way society itself operates. People no longer trust each other, nor do they trust the government, the media, or any other authority that was once considered credible and reliable. It is in this framework that “a new architecture of trust” was developed, leading to the birth of Bitcoin and the blockchain technology in 2009. Werbach (2018) analyzes the relationship between trust and the blockchain in his book “The blockchain and the new architecture of trust”.
== What are the blockchain and Bitcoin ==
The blockchain is a distributed and decentralized digital ledger (i.e. a record of accounts) that records transactions across a network of computers in a secure, transparent, and tamper-proof manner. In a blockchain, transactions are grouped into blocks, which are linked together in a chronological and linear order, forming a chain of blocks. Each block contains a list of transactions, a timestamp, and a reference to the previous block in the chain, creating a verifiable record of all transactions that have ever occurred on the network. One of the key features of a blockchain is its consensus mechanism, which ensures that all participants in the network agree on the state of the ledger. Once a block is added to the blockchain, it is considered immutable, meaning that the data in the block cannot be altered or deleted without the consensus of the majority of the network. This makes blockchains secure and resistant to tampering or manipulation. The transactions registered on the blockchain are performed through smart contracts, which are pieces of code that execute a predetermined function, like transferring a bitcoin, with no possibility to alter the agreement. Finally, a cryptocurrency is a digital currency that runs on the blockchain network.
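The hash-linking described above can be sketched in a few lines (a deliberately minimal illustration with no network, consensus, or signatures):

```python
import hashlib, json

def block_hash(block):
    # hash the block body deterministically (sorted keys -> stable JSON)
    body = {k: block[k] for k in ("transactions", "timestamp", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash, timestamp=0):
    block = {"transactions": transactions, "timestamp": timestamp,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                       # the block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                       # the link to the parent is broken
    return True

genesis = make_block(["Alice pays Bob 1 coin"], prev_hash="0" * 64)
chain = [genesis, make_block(["Bob pays Carol 1 coin"], genesis["hash"])]
print(chain_is_valid(chain))                   # True

chain[0]["transactions"] = ["Alice pays Mallory 1 coin"]  # tamper with history
print(chain_is_valid(chain))                   # False: the record is tamper-evident
```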
Bitcoin, introduced by Nakamoto (2009), was the first digital currency and the first example of the blockchain. It relies on three elements: cryptography, digital cash and distributed systems. Cryptography can be considered the science of secure communications and it is employed for this purpose in the blockchain technology. Each agent that interacts with Bitcoin is identified by a private key associated with a public key through cryptography, so that each transaction can be verified and associated with a user without the need to disclose his private key. What is called a coin is in reality a chain of signatures of verified transactions: bitcoins come from the unspent outputs of previous transactions, all registered on the blockchain. Each transaction is verified by a network of nodes (i.e. participants in a distributed network that maintain a copy of the blockchain ledger and take part in the consensus process). All the agents need to trust the state of the ledger: this is achieved by the consensus mechanism. Consensus comes from a process called mining, in which agents compete to verify the transactions and create a new block of the blockchain, in exchange for a reward (transaction fees and newly mined bitcoins). The winner is randomly determined, but all the other agents verify independently that the new block is legitimate. Being untrustworthy is not profitable: mining is an expensive activity, because miners engage in a proof-of-work system, where they have to solve a cryptographic puzzle to earn the right to validate the transactions. This requires energy and money, and the more energy and money an agent puts into mining, the more chances he has to win. The benefits of cheating are much lower than the costs, so each agent can trust the state of the ledger because there are no incentives to deviate.
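The proof-of-work puzzle can be sketched as follows (a simplified illustration: real Bitcoin mining compares a double-SHA-256 hash against a numeric target, while here the difficulty is just a required number of leading zeros):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose hash has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # the proof that work was performed
        nonce += 1

nonce = mine("block 1: Alice pays Bob", difficulty=4)
# finding the nonce takes many hashes on average, but anyone can verify it
# with a single hash -- this asymmetry is what makes cheating unprofitable
check = hashlib.sha256(f"block 1: Alice pays Bob{nonce}".encode()).hexdigest()
print(check.startswith("0000"))   # True
```

Each extra required zero multiplies the expected hashing work by 16, which is how the cost of mining scales with difficulty.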
Finally, the consensus mechanism also has the objective of making the ledger immutable, because each block is linked to the hash of the previous one. Changing a past block would mean forking the chain, and the fork would be rejected by the majority of users. Only if an agent controlled more than 50% of the computing power (which is practically impossible) would such a change be viable.
== A new form of trust ==
The innovation of the blockchain is connected to the fact that every participant can trust the information recorded on the ledger without necessarily trusting another agent to validate it. There is no need for a central authority to validate the transactions, and trust is reinforced by the fact that there are mechanisms that make it impossible to alter the transactions already recorded on the ledger. The idea of Satoshi Nakamoto was to design a system that, through incentives, aligned the needs and objectives of every participant, so that what is recorded on the ledger can be trusted without trusting (or knowing) the other agents. Nakamoto claimed to have eliminated the need for trust but, according to Werbach (2018), that would be impossible. What Nakamoto created is trust in “a new architecture of trust”, where independent agents run the technology, validating the transactions so that they can be recorded on the ledger. This is reinforced by the fact that distributed ledger networks make people work together in a way that otherwise would not have been possible, since they would not have trusted each other sufficiently.
To better understand what he means by a “new architecture”, the author firstly outlines the various architectures of trust (which he defines as "the ways the components of a system interact with one another" (Werbach, 2018, p. 25)) that humans have developed over time. The main architectures are:
- Peer-to-peer (P2P): here, trust is based on a face-to-face relationship that arises because the agents share ethical norms and mutual commitment. The downside of this architecture is that it is possible only for a few people and small communities, given that knowing each other is pivotal in creating trust.
- Leviathan: this vision starts from the belief that humans are not fully trustworthy and that a powerful third party, the state/government, is therefore needed to enforce private contracts and property rights. This is achieved through the state's monopoly of violence: people can trust each other because, if something goes wrong, the Leviathan can punish the guilty and enforce previous commitments.
- Intermediaries: transactions are guaranteed by a third party (different from the government), which is trusted to perform certain actions. Intermediaries create the possibility to perform transactions that would have been difficult in a peer-to-peer network: the other agent is trusted because there is an intermediary that makes the transaction happen. Examples are e-commerce platforms such as Amazon, or financial services companies.
The new architecture of trust created by the blockchain is defined as “trustless trust”. Without trust it would fail, since no engagement between individuals is possible without some form of trust, but if it relied on the old trust structures it would not be a revolution and would fail in its primary objective. On the blockchain network, no agent is assumed to be trustworthy, but the output of the network is. Generally speaking, in every transaction the counterpart, the intermediary and the dispute resolution mechanism must be trusted, but the blockchain substitutes these elements with code. There is no possibility to assess the other party’s trustworthiness, since all agents are represented by private/public keys in the network, which allow for their anonymity; there is no central intermediary, since the platform is a distributed machine operated by all the participants; disputes are solved through pieces of code called smart contracts, which perform a certain action with no possibility of stopping them. Transactions are verified through cryptographic proofs that other agents can check mathematically. Therefore, it is not possible to frame this system within the common architectures: it is not P2P since the other parties are unknown, there is no central authority, and there is no central intermediary since the platform is operated in a decentralized way. Each agent needs to trust the network and not the individual agents with whom he is engaging in a transaction. The blockchain (and Bitcoin) seems the perfect solution for the lack of trust in modern society and for the problems that the previous architectures of trust presented. The fact that Bitcoin was born after the Great Financial Crisis is not accidental: P2P relationships were not sufficient in a world so deeply interconnected, intermediaries were considered the cause of the crisis itself, and the Leviathan, i.e. the government, was not able to foresee the crisis and prevent it.
Blockchain trust also relies on the immutability of the recorded information, through the mechanisms explained above. However, immutability must be understood in a probabilistic way: the more blocks are added, the more immutable previous transactions become, because altering them would require a prohibitive amount of computing power. Each agent can decide after how many blocks he trusts the state of the ledger; therefore, blockchain trust is not instantaneous. Moreover, the transparency of the ledger, meaning that the record of every transaction is publicly available and the software through which the blockchain operates is open source, is an important characteristic that increases trust. Finally, blockchain trust is algorithmic, meaning that it relies on algorithms to maintain the system: what must be trusted are not the people operating on it, but the software and the math behind the consensus process.
Satoshi’s error was to believe that trust was absent from his architecture, while in reality it only reduced the need to trust some parts of the system. Trust is needed, and the blockchain could not function without it. Firstly, engaging in a transaction in a system without central control and with immutability means that no one is able to oversee the transaction and amend it if something is wrong. Agents can be confident that the transaction will be correctly registered, but a distributed ledger is not able to verify whether the content is legitimate and, if something is wrong with the transaction itself, there is no possibility to reverse it: smart contracts are unstoppable. Moreover, humans are not entirely out of the system, which means that errors and misunderstandings can still occur. And cryptographic techniques are still vulnerable to attacks: these may be difficult to perform, but users who engage with a blockchain need to trust that they will not happen.
The author argues that the success of the blockchain as an architecture of trust will depend on its governance. The blockchain is a way to enforce some rules, but it is also a product of rules designed by humans, and it therefore needs a governance to continue to operate and to decide the next rules of the game. Moreover, the law should regulate the blockchain framework: without legal rules, the blockchain could be used as an instrument by criminals and terrorists (for laundering money, for example), and this would reduce the trust that ordinary people place in the system. Crypto enthusiasts argue that the role of law will be replaced by smart contracts, but code cannot fully capture human intentions, which are an important part of private contracts, and this could create misunderstandings between the parties. The law can intervene where smart contracts cannot. Finally, regulation too can play an important role in shaping the future of the blockchain and fostering its trustworthiness, as it does with other financial instruments and institutions.
== Trust and blockchain in practice ==
Some scholars have started to think about how trust between users can be enhanced in real blockchain applications. You et al. (2022) identify the main challenge as the lack of consensus on how to measure trust in the blockchain environment. Therefore, they develop a framework to do so, creating a system based on subjective ratings of trustworthiness. The authors start by identifying six different blockchain application domains, considering which factors can be used to measure trust in each specific domain. Identifying the key factors behind trustworthiness is essential for creating a system to enhance trust. In particular:
- Supply chain: the trustworthiness of the supplier can be measured by the average order arrival time and the defect rate, and that of the buyer by the number of days taken for payment.
- Healthcare industry: to assess the trustworthiness of these firms, regulatory compliance proof, claim approval rate and drug prescription regularity can be the starting point.
- E-commerce: to assess the trustworthiness of those firms, the accuracy of ratings provided by the users and the security of payments represent the most important features.
- IoT devices: system security data and the reliability of the data provided by these devices are the most important features.
- Finance: pivotal factors are the security of transactions and data and the efficiency and quality of communications.
- Social media: news and reputation credit represent the most important characteristics to assess trustworthiness.
The problem with the blockchain is that, although the recorded information cannot easily be modified, the data may not always be true: this is why the need for accountability arises.
The system presented by the authors is based on trust scores given by agents that interact with other agents on blockchain applications. Initially, there is no score, since no transaction has occurred yet. Then, the two parties start to interact and begin to collect trust factors about each other; the specific factors, described above, depend on which application is under consideration. Each actor then gives his score, which is recorded on the blockchain and made available to other users, who are now better informed about the other users of the application and can decide whether to interact with them or not. The validity of the scores is ensured by the fact that each user will have followed the KYC validation procedures before interacting on the application, and it will be possible to identify a particular participant from the outside through verifiable credentials. Therefore, no rating is anonymous.
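The rating flow can be sketched as follows (the class, field names and the 1-5 scale are illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    # append-only record: once written, a rating is never modified or deleted
    ratings: list = field(default_factory=list)

    def record(self, rater: str, ratee: str, score: int):
        assert 1 <= score <= 5, "scores are assumed to be on a 1-5 scale"
        self.ratings.append({"rater": rater, "ratee": ratee, "score": score})

    def trust_score(self, ratee: str) -> float:
        scores = [r["score"] for r in self.ratings if r["ratee"] == ratee]
        return sum(scores) / len(scores) if scores else 0.0

ledger = TrustLedger()
# raters are identified (KYC-validated), so no rating is anonymous
ledger.record("buyer-KYC-001", "supplier-KYC-042", 5)  # fast delivery
ledger.record("buyer-KYC-007", "supplier-KYC-042", 3)  # some defective items
print(ledger.trust_score("supplier-KYC-042"))          # 4.0
```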
This system may increase trust between users because they are incentivized to adhere to the common organizational norms of each sector: otherwise, they would damage their reputation by having a low score permanently recorded on the blockchain. Therefore, this model may create a set of incentives that aligns the two sides of each transaction.
== Trust games in the blockchain ==
As explained before, the blockchain system employs game theory and incentives to make the agents act honestly on the network. After the work of Satoshi Nakamoto, several papers have studied the incentive structures and the games behind the blockchain and its consensus mechanism.
Breiki (2022) studies how trust among players evolves over time when they play trust evolution games. To do that, he defines the features of his abstract game. Firstly, the author identifies the parameters of the model: there are various miners and each of them has the possibility to cooperate (acting honestly) or defect (cheating); there is a vector of probabilities defining the likelihood of each player succeeding in solving the puzzle, which is proportional to their computational power; there are the costs and rewards of mining, which take into account the propagation delay (i.e. the time needed to validate a transaction); and there is the market value. Moreover, the author uses two learning algorithms: fictitious play, where prior beliefs are defined, and satisficing learning, where aspiration levels of payoff and learning rates are defined. All in all, the author finds that players learn to cooperate in the game to get a better payoff and that, for satisficing players, lower learning rates increase the final payoffs.
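Fictitious play in a stylized two-miner cooperate/defect game can be sketched as follows (the payoff numbers are assumed for illustration, chosen so that honest mining pays off, in line with the paper's finding; they are not taken from Breiki (2022)):

```python
PAYOFF = {  # (my_action, their_action) -> my payoff (assumed numbers)
    ("C", "C"): 3, ("C", "D"): 1,   # C = cooperate (mine honestly)
    ("D", "C"): 2, ("D", "D"): 0,   # D = defect (cheat)
}

def best_response(opp_counts):
    """Fictitious play: best-respond to the opponent's empirical action mix."""
    p_c = opp_counts["C"] / sum(opp_counts.values())
    value = {a: p_c * PAYOFF[(a, "C")] + (1 - p_c) * PAYOFF[(a, "D")]
             for a in ("C", "D")}
    return max(value, key=value.get)

# prior beliefs: each miner initially thinks the other is equally likely to cheat
beliefs = [{"C": 1, "D": 1}, {"C": 1, "D": 1}]
for _ in range(50):
    a0 = best_response(beliefs[0])   # miner 0 responds to beliefs about miner 1
    a1 = best_response(beliefs[1])
    beliefs[0][a1] += 1              # beliefs are updated with observed actions
    beliefs[1][a0] += 1

print(a0, a1)  # C C: both miners converge to honest cooperation
```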
J. Zhang and Wu (2021) study evolutionary game theory applied to the blockchain network to understand the strategies and incentives of the participants and their cooperative behavior. The authors explain that the blockchain is a perfect environment for evolutionary game theory because:
- There is information symmetry, since all individuals have and share the same information on the network and each participant has complete transaction data.
- All the participants are equal so no party has a dominant advantage when the game begins.
- Participants are prone to trust each other and engage in the game because of the cryptographic mechanisms, which make the environment credible and immutable.
- The process of adding new blocks can be seen as a form of repeated games.
Agents have bounded rationality: they cannot obtain global information, because the network is complex, and therefore they are not fully able to maximize their payoffs. Each participant can adopt one of two possible behaviors, cooperation or defection, and they update their strategy considering the maximum payoff. Indeed, during the generation of new blocks, each agent is able to learn from his own actions and from the actions of the winners.
The model developed comprises two groups of miners: group A, inclined to cooperation, and group B, inclined to cheating. Participating in each group has a cost, and each game brings a revenue R, which is rewarded to the participants. Each group has different benefits (for group A, transaction fees and mining rewards; for group B, illegal revenue). Finally, there are also punitive measures, denoted P.
Each player, in both groups, can decide which strategy to adopt; the resulting payoffs combine the participation costs, the revenues of each group and the punitive measures defined above.
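A minimal sketch of such a payoff structure, with hypothetical parameter names and values (the exact payoff matrix of J. Zhang and Wu is not reproduced here):

```python
def payoff(group, cooperates, *, cost=1.0, fees=0.8, reward=2.0,
           illegal_revenue=3.0, penalty=2.5, detection_prob=0.6):
    """Illustrative per-round payoff; parameter names and values are
    assumptions, not the exact specification of J. Zhang & Wu (2021)."""
    if group == "A" and cooperates:
        return fees + reward - cost                     # honest miner
    if group == "B" and not cooperates:
        # cheater: illegal revenue minus the expected punishment P
        return illegal_revenue - cost - detection_prob * penalty
    return -cost                                        # cost with no gain

print(round(payoff("A", True), 2), round(payoff("B", False), 2))
```

Under these assumed values, honest mining out-earns cheating once the expected punishment is factored in, which is the mechanism the evolutionary analysis then explores.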
The authors assume that, at the beginning, each participant starts with a decided strategy. The goal is to examine the change in the population of honest and dishonest agents after several rounds, taking into account also changes in the parameters of the game. The authors argue that, unlike in classic evolutionary games, the relationships among agents in the blockchain are random. Moreover, the size of the network matters: with small networks, the emergence of cooperative behaviors is easier. Finally, a definition of the evolutionarily stable strategy (ESS) is given. The ESS is “a strategy that other strategies cannot invade” (J. Zhang & Wu, 2021, p. 5).
The authors run various simulations. Firstly, 67% of group A are honest agents and 20% of group B are betrayers. Here, the honest strategy is an ESS, since the number of betrayers tends to 0 as the rounds increase. However, as the expected payoff of group B grows, the honest strategy remains an ESS but becomes weak, because higher payoffs from the dishonest strategy tend to tempt agents to cheat. Therefore, future revenue expectations influence the behavior of participants in the blockchain. Then, the influence of the network structure is analyzed within the group A population, using a Watts-Strogatz (WS) small-world model and a Barabási-Albert (BA) scale-free network model. The authors find that it takes quite a lot of rounds for honest agents to establish a trusting cooperative relationship, which may represent an opportunity for cheating agents in the blockchain. Therefore, security is a relevant topic especially in the initial stage of the blockchain.
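The convergence of the honest share toward an ESS can be illustrated with a mean-field replicator update. This is a simplification, since the authors run agent-based simulations on WS and BA network topologies; the payoff functions and values below are assumptions:

```python
def replicator(x0, payoff_h, payoff_d, rounds=500, step=0.1):
    """Discrete replicator dynamics for the share x of honest agents;
    payoff_h(x) / payoff_d(x) give the average payoffs at honest share x."""
    x = x0
    for _ in range(rounds):
        x += step * x * (1 - x) * (payoff_h(x) - payoff_d(x))
        x = min(max(x, 0.0), 1.0)  # keep the share inside [0, 1]
    return x

# Assumed payoffs: honesty pays more the larger the honest share
# (cooperation compounds), while cheating yields a flat net revenue.
honest = lambda x: 2.0 * x + 0.5
cheat = lambda x: 1.5

# Starting above the tipping point x = 0.5, the honest share converges to 1
print(round(replicator(0.67, honest, cheat), 3))  # -> 1.0
```

Starting below the tipping point instead drives the honest share to 0, which echoes the paper's point that cooperation is fragile in the initial stage of the blockchain.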
L. Zhang and Tian (2023) develop a Byzantine consensus protocol (i.e. a consensus protocol where there are faulty and malicious agents) on the blockchain, shaping it as a dynamic game. Their main contribution to the current literature lies in the fact that the agents in their model have bounded rationality and can learn from historical observations. In particular, this means that the participants can choose among a limited set of strategies (honest or dishonest) and that they choose their strategy by learning from historical observations, taking into account the current state of information, but without being able to forecast the future. Moreover, they are allowed to have inconsistent subjective beliefs about the probability of meeting agents with their same strategy: each agent believes that, for a portion m of rounds, he will meet a proposer with the same strategy, ranging from m=1 (meaning that he and the proposer will always have the same strategy in every round) to m=0 (meaning that he and the proposer will have the same strategy only by chance). Their model, based on a BFT (Byzantine Fault Tolerance) consensus protocol, consists of the following features:
- The agents, before the game, are selected to form different parallel committees, which compete in a n round mining game and which will not change until the end of the game.
- Each agent has one vote in the mining game.
- In each round, an agent is randomly selected to make a proposal about the validity of a block. The other agents become validators and vote on whether the proposed block is valid. The block is validated if the number of votes is higher than v, a majority threshold.
- There is a reward R for validating the block, a cost for verifying a transaction, a cost for voting for a transaction and a penalty k that validators encounter if they misbehave.
Before each round, each validator checks whether their congeners (the other nodes in the network) have pivotality, i.e. the ability to control the consensus outcome because they hold the majority. As specified before, the participants have two strategies: the honest strategy, where miners follow the consensus protocol, and the Byzantine (dishonest) strategy, where miners damage the consensus protocol. The authors assume that the participants are fixed and that no one quits. Initially, a given number of miners choose the honest strategy in the first round.
The authors specify the concept of stable equilibrium. Denoting by α<sub>t</sub> the portion of miners that perform the honest strategy in round t, a stable equilibrium is the situation where α<sub>t+1</sub> = α<sub>t</sub>, i.e. the portion of honest miners remains stable (the number of agents that change their strategy from honest to dishonest equals the number that change from dishonest to honest). There are 3 possible stable equilibria:
- The honest stable equilibrium, where no agent is cheating, so α = 1.
- The Byzantine stable equilibrium, where all agents are cheating, so α = 0.
- The pooling stable equilibrium, where both strategies coexist, so 0 < α < 1.
Each equilibrium can be reached depending on the initial number of cheating/honest miners, their belief m, the cost-reward mechanism and the pivotality rate (i.e. the minimum percentage of nodes that must agree to reach consensus and add a block). The authors find that only the honest stable equilibrium can support the safety, the liveness (the property that all non-faulty agents eventually produce an output) and the validity (the property that all participants obtain the same valid output) of the blockchain. Moreover, they find that if the reward-punishment ratio increases, the blockchain becomes safer and the honest stable equilibrium is easier to achieve, while if the cost-punishment ratio increases, the safety and the liveness of the ledger are threatened and the honest equilibrium is more difficult to achieve. Finally, if the pivotality rate increases, every stable equilibrium is harder to achieve.
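The voting and reward-punishment structure described above can be sketched as follows; the names `block_accepted` and `validator_payoff` and all parameter values are illustrative assumptions, not L. Zhang and Tian's exact specification:

```python
def block_accepted(n_validators, n_honest, v):
    """Deterministic sketch of the vote: honest validators accept a valid
    block, Byzantine ones reject it; the block passes if acceptance votes
    exceed the majority threshold v (a fraction of the committee)."""
    return n_honest > v * n_validators

def validator_payoff(honest, accepted, *, R=3.0, verify_cost=0.5,
                     vote_cost=0.2, k=2.0):
    """Illustrative payoffs: the reward R accrues to honest validators on
    acceptance, honest work costs verification plus voting, and detected
    misbehavior incurs the penalty k. All values are assumptions."""
    base = -vote_cost - (verify_cost if honest else 0.0)
    if accepted:
        base += R if honest else -k  # cheaters are penalized once the block validates
    return base

accepted = block_accepted(100, 70, 2 / 3)   # 70 honest votes out of 100
print(accepted, round(validator_payoff(True, accepted), 2),
      round(validator_payoff(False, accepted), 2))
```

Raising k relative to the costs makes honesty the better-paying strategy in this sketch, which is the intuition behind the finding that a higher reward-punishment ratio makes the honest stable equilibrium easier to achieve.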
<span id="trust-in-algorithms"></span>
= Trust in algorithms =
Algorithms are becoming more and more important in everyday life, from health care to criminal justice systems, contributing to decision making processes in many fields. Therefore, the natural question of whether it is possible to trust algorithms arises.
According to Spiegelhalter (2020), the trustworthiness of an algorithm comes from the claims made about the system, including how its developers describe how the system works and what it can do, as well as the claims made by the system itself, which refer to the algorithm’s responses and output regarding specific cases. Therefore, he proposes a model to assess and boost trustworthiness in algorithms.
Regarding the first kind of claims, developers should clearly state what the benefits and drawbacks of using their algorithms are. To assess that, the author proposes an evaluation structure of 4 phases:
- Digital testing: the algorithm accuracy should be tested on digital datasets.
- Laboratory testing: the algorithm’s results should be compared with human experts in their field. An independent committee should evaluate which response is better.
- Field testing: the system should be tested in the field, to decide whether it does more harm or good, considering also the effects that it can have on the overall population.
- Routine use: if the algorithm passes the 3 previous phases, it should be monitored continuously, in order to solve problems that may arise.
Having explicit positive evidence in all these phases would boost the trustworthiness of the claims made about the system by developers.
Considering the second type of claims, to reach a higher degree of trustworthiness the algorithm must specify the chain of reasoning behind its claims, the most important factors that led to its output and the uncertainty around the claim. Moreover, a counterfactual analysis should be performed (i.e. what the output would be if the input changed). Overall, algorithms should be made clearer and more explainable, and transparency can play an important role in that. To increase trustworthiness, an algorithm should be accessible and intelligible to people; it should be useable, i.e. have an effective utility; and it should be assessable, meaning that the process behind every claim should be available. Ultimately, it should show how it works and, more importantly, clearly state its own limitations, so that trustworthiness does not become blind trust.
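These criteria can be illustrated with a toy transparent scorer that reports its decision, the contribution of each factor and a per-factor counterfactual check. Spiegelhalter proposes no specific implementation, so the function, the factor names and the weights below are all illustrative assumptions:

```python
def explain_prediction(weights, inputs, threshold=0.5):
    """Hypothetical transparent scorer: reports the decision, the factor
    contributions behind it, and whether dropping each single factor
    would flip the decision (a simple counterfactual analysis)."""
    contributions = {k: weights[k] * inputs[k] for k in weights}
    score = sum(contributions.values())
    return {
        "output": score >= threshold,
        "top_factor": max(contributions, key=lambda k: abs(contributions[k])),
        "contributions": contributions,
        # counterfactual: the decision with factor k removed from the score
        "counterfactuals": {k: (score - c) >= threshold
                            for k, c in contributions.items()},
    }

# Hypothetical credit-style example: factor names and weights are invented.
report = explain_prediction({"income": 0.4, "debt": -0.3, "history": 0.2},
                            {"income": 2.0, "debt": 1.0, "history": 1.5})
print(report["output"], report["top_factor"])  # -> True income
```

Here the report exposes that the decision rests mainly on income (removing it flips the outcome), which is exactly the kind of accessible, assessable claim the framework asks for.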
<span id="conclusion"></span>
= Conclusion =
Trust plays a pivotal role in ensuring the existence and development of modern societies. This paper provides a comprehensive summary of the current literature on how trust relates to the world of economics and finance. To begin with, the concept of trust is clearly defined and differentiated from other human sentiments such as cooperation and confidence. The notion of risk is also discussed, as a complete assessment of the other party’s actions is antithetical to a trust relationship. Furthermore, various methods for measuring trust are explored, with trust games being the most commonly used tool by researchers. The paper goes on to explain how trust is a source of comparative advantage, which determines trade patterns. Additionally, trust is linked to stock market participation, with more trusting individuals being more likely to invest in risky assets and, conditional on participating, to allocate a larger portion of their wealth to them. While complete trust among individuals would potentially be beneficial, it is not achievable in the real world. However, money can act as a substitute that replicates the allocations of a trustworthy economy. The paper also emphasizes the importance of trusting institutions, especially in a world where trust is lacking: effective communication among individuals is critical in assessing the true trustworthiness of an institution. The paper then delves into the relationship between trust and blockchain technology, which can be seen as a new architecture of trust. Moreover, various authors have developed blockchain trust games to better understand the best consensus mechanism process. Finally, the importance of trusting algorithms is highlighted, given their widespread use in everyday technology, healthcare, and the justice system.
<span id="references"></span>
= References =
- Alós-Ferrer, C., & Farolfi, F. (2019). Trust games and beyond [Accessed on April 19, 2023]. Frontiers in Neuroscience, 13. [1]
- Bajos, N., Spire, A., Silberzan, L., Sireyjol, A., Jusot, F., Meyer, L., Franck, J.-E., Warszawski, J., Bagein, G., Counil, E., Jusot, F., Lydie, N., Martin, C., Meyer, L., Raynaud, P., Rouquette, A., ... Spire, A. (2022). When lack of trust in the government and in scientists reinforces social inequalities in vaccination against covid-19 [Accessed on May 7, 2023]. Frontiers in Public Health, 10. [2]
- Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142.
- Bohnet, I., & Zeckhauser, R. (2004). Trust, risk and betrayal [Trust and Trustworthiness]. Journal of Economic Behavior & Organization, 55(4), 467–484.
- Breiki, H. (2022). Trust evolution game in blockchain [Accessed on April 20, 2023]. 2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA), 1–4. [3]
- Burnham, T., McCabe, K., & Smith, V. L. (2000). Friend-or-foe intentionality priming in an extensive form trust game. Journal of Economic Behavior & Organization, 43(1), 57–73.
- Cingano, F., & Pinotti, P. (2016). Trust, firm organization, and the pattern of comparative advantage. Journal of International Economics, 100, 1–13.
- Edelman. (2022). Edelman trust barometer 2022 [Accessed on April 22, 2023]. [https://www.edelman.com/sites/g/files/aatuss191/files/2022-01/2022%20Edelman%20Trust%20Barometer%20FINAL_Jan25.pdf]
- Fehr, E., Fischbacher, U., Rosenbladt, B. v., Schupp, J., & Wagner, G. G. (2003). A nation-wide laboratory examining trust and trustworthiness by integrating behavioral experiments into representative surveys [Accessed on May 2, 2023]. Working paper / Institute for Empirical Research in Economics, 141. [4]
- Gale, D. (1978). The core of a monetary economy without trust. Journal of Economic Theory, 19(2), 456–491.
- Gambetta, D. (2000). Can we trust trust? Trust: Making and Breaking Cooperative Relations, electronic edition, Department of Sociology, University of Oxford, 213–237.
- Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring Trust*. The Quarterly Journal of Economics, 115(3), 811–846.
- Grimes, A. (1990). Bargaining, trust and the role of money. The Scandinavian Journal of Economics, 92(4), 605–612.
- Guiso, L., Sapienza, P., & Zingales, L. (2008). Trusting the stock market. Journal of Finance, 63, 2557–2600.
- Houser, D., Schunk, D., & Winter, J. (2010). Distinguishing trust from risk: An anatomy of the investment game. Journal of Economic Behavior & Organization, 74(1), 72–81.
- Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases trust in humans. Nature, 435(7042), 673–676.
- Lenton, P., & Mosley, P. (2011). Incentivising trust. Journal of Economic Psychology, 32(5), 890–897.
- Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734.
- Meylahn, B. V., den Boer, A. V., & Mandjes, M. (2023). Trusting: Alone and together [Accessed on May 9, 2023]. [5]
- Nakamoto, S. (2009). Bitcoin: A peer-to-peer electronic cash system [Accessed on April 19, 2023]. Cryptography Mailing list at [6].
- Spiegelhalter, D. (2020). Should We Trust Algorithms? [Accessed on April 30, 2023]. Harvard Data Science Review, 2(1). [7]
- Warren, M. (2018). Trust and Democracy. In The Oxford Handbook of Social and Political Trust. Oxford: Oxford University Press.
- Werbach, K. (2018). The Blockchain and the New Architecture of Trust. Cambridge: The MIT Press.
- You, S., Radivojevic, K., Nabrzyski, J., & Brenner, P. (2022). Trust in the context of blockchain applications. 2022 Fourth International Conference on Blockchain Computing and Applications (BCCA), 111–118.
- Zak, P. J., Kurzban, R., & Matzner, W. T. (2005). Oxytocin is associated with human trustworthiness. Hormones and Behavior, 48(5), 522–527.
- Zhang, J., & Wu, M. (2021). Cooperation mechanism in blockchain by evolutionary game theory [Accessed on April 19, 2023]. Complexity, vol. 2021. [8]
- Zhang, L., & Tian, X. (2023). On blockchain we cooperate: An evolutionary game perspective [Accessed on April 28, 2023]. [9]