The role of trust in economics and finance

From Fintech Lab Wiki


Contribution of SIMONE GOZZINI

1. Introduction

Trust is a fundamental sentiment, the binding force behind modern societies: without it, no progress would have been possible. Trust is what permitted the birth of modern finance with the Buttonwood Agreement of 1792, it is what allows people to rely on another person or organization without the continuous need to assess what the other party is doing, and it is ultimately what permits the existence of modern democracies (Warren, 2018). Without it, no meaningful relationship would be possible. However, in the last few years, a general disbelief about trust has been permeating civil society. Edelman is a global communications firm that conducts a comprehensive survey about trust every year. For 2022, the results picture a general sentiment of distrust across all segments of the population: 60% of the interviewed say that their default tendency is to distrust others, media are seen as a divisive and untrustworthy institution by around 50% of the people, and trust in government keeps dropping significantly, year after year. The problem is particularly accentuated for governments, which are seen as unable to fix societies' problems (Edelman, 2022). Distrust affects modern societies as a whole, impacting not only social relationships and the economy, but also human health: for example, lower trust in government has led to lower vaccination rates against COVID-19, threatening society as a whole (Bajos et al., 2022).

This paper highlights the importance of trust in modern economies and in the financial world. Section 2 describes the concept of trust, differentiating it from other human sentiments like cooperation and confidence. In general, the concept of risk is involved, given that trust entails a sort of faith in someone or something. Section 3 describes the various methodologies used in the literature to measure trust: trust games, surveys and the frontier of neuroscience. Section 4 presents trust as a source of comparative advantage in world trade patterns: societies with more trust have bigger and more productive firms. Section 5 studies how trust affects stock market participation: people with a higher tendency to trust are more likely to participate in the stock market and, conditional on participating, they invest a higher fraction of their wealth. Section 6 describes a general equilibrium model where money is seen as a substitute for trust: the allocation of resources in a trustworthy society can be reached also in a trust-less society that employs money. Section 7 describes a stylized model of trust between individuals and an institution: the exchange of information among individuals is found to be a tool that improves the assessment of the true trustworthiness of an institution. Section 8 presents blockchain technology as a new architecture of trust, describing also how trust can be enhanced to reach a wider diffusion and application of this technology. Section 9 presents various papers regarding trust games in blockchain technology, considering in particular how to reach and improve the consensus process. Section 10 describes how algorithms, which are becoming more and more important in modern life, can be trusted: in particular, the author highlights transparency and accessibility as fundamental characteristics to enhance trust. Section 11 concludes.

2. The concept of trust

According to the definition of Gambetta (2000), trust is “a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action” (Gambetta, 2000, p. 5). This definition highlights important concepts:

  • Trust is a probability p, a threshold, but subjective: people engage in a trust relationship if they believe that the probability that the person will perform the particular action mentioned in the definition is higher than a certain level, which depends on the individual predisposition to trust and the circumstances under which the relationship is being created, like the cost of misplacing trust.
  • Trust is related to uncertainty: the underlying assumption is that the agent is not able to fully monitor the other agent while he performs the particular action (this is usually the case in practice); otherwise trust would not be necessary, since the first could keep track of the second while the action is performed.
  • It has an impact on the trustor, otherwise the actions of the second agent would not matter to the first and engaging in a relationship would not be necessary.

Trust is relevant when the other agents (trustees) are free to betray the trustor; otherwise, if coercion intervenes, the outcome of the trust relationship is known ex ante. Assessing a probability would not be needed anymore and uncertainty would not be involved: the resulting interaction would not be a trust relationship, given that it lacks its fundamental characteristics. Furthermore, the trustor too needs to be free to choose whether to engage in this relationship or to escape from it, otherwise assessing p would again not be needed, since he would have no choice. Therefore, trust is fundamentally a free choice between two individuals who seek mutual benefits and it involves a level of risk for the trustor, given that it affects his personal sphere of action.
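The threshold reading of trust above can be sketched numerically. The decision rule, the payoff names and all numbers below are illustrative assumptions, not part of Gambetta's analysis: the trustor engages only when the expected value of trusting, given the gain from honored trust and the loss from misplaced trust, is non-negative.

```python
# Hypothetical sketch of the subjective threshold p: a trustor with
# subjective probability p that the trustee will perform the action
# engages only if p * gain - (1 - p) * loss >= 0.

def trust_threshold(gain: float, loss: float) -> float:
    """Minimum subjective probability p at which trusting has
    non-negative expected value."""
    return loss / (gain + loss)

def engages(p: float, gain: float, loss: float) -> bool:
    """The trustor engages in the relationship if p clears the threshold."""
    return p >= trust_threshold(gain, loss)

# When the cost of misplacing trust rises, the required p rises with it.
print(trust_threshold(gain=100, loss=100))  # 0.5
print(trust_threshold(gain=100, loss=300))  # 0.75
print(engages(p=0.6, gain=100, loss=300))   # False
```

The sketch captures the point made above: the threshold is not fixed but depends on the circumstances, in particular the cost of misplaced trust.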

2.1 Cooperation and trust

Trust, however, must not be confused with other human actions, sentiments, or beliefs. Trust is different from friendship, from passion, from loyalty and from cooperation. In particular, the relationship between the latter and trust is investigated extensively by the author.

Trust can be seen as:

  • a precondition for cooperation, which, together with healthy competition, is beneficial to foster human progress. However, although it is probably the most efficient way to achieve cooperation, it is not a necessary condition: throughout history people have used surrogates to overcome the lack of trust, like coercion, contracts and promises. All of them have the objective of diminishing the possible alternatives that the trustor and the trustee face, thus reducing the risk for both parties of engaging in the relationship. A higher level of trust increases the probability of cooperating, but it is possible that, even though the level of p is low, the result is cooperation anyway. This is because an agent also takes into consideration the cost and the benefit of engaging (or not) in such a relationship, the other alternatives he has and the specific situation.
  • an outcome of cooperation. Societies may engage in cooperation thanks to “a set of fortunate practices”, particular circumstances and the need to satisfy mutual interests, for which the cost of not engaging in cooperation is higher than the risk of engaging. Trust is then the outcome of these practices and there is no need for prior beliefs about the trustworthiness of the other party, since trust will arise only after the beginning of the relationship, when information is collected. This statement is reinforced by the fact that cooperation exists among animals, which are unlikely to experience trust.

However, according to the author, there is no reason to claim that cooperation is a spontaneous equilibrium in human interaction: cooperation is just as likely as non-cooperation. A predisposition to trust may be rational for humans in order to achieve their objectives, since trust is fundamentally an efficient way to achieve cooperation, but it is not necessary to wait for trust to evolve in order to initiate cooperation. Common interests and constraints can be enough, and they can be beneficial especially in underdeveloped countries, which present low levels of trust. In fact, although trust and trustworthiness can be advantageous for an individual's purposes, they cannot be artificially induced in a rational person. Moreover, the author argues that rational trustors and trustees may seek and present evidence for trusting and for being trustworthy. However, more information cannot fully solve the problem of trust. People, once they trust, do not try to find evidence to corroborate their belief; rather, they change their mind only if they find contrary evidence, which is not easy.

2.2 The concept of trust in organizations

Mayer et al. (1995) start from the studies of Gambetta to further develop the understanding of trust in the context of organizations. In particular, they focus on the trust relationship between two individuals: a trustor who does or does not trust an individual to perform a particular action, and a trustee who receives the trustor's trust and then decides whether or not to fulfill that action. The flow of trust is unidirectional: mutual trust between two parties is not developed in the paper, nor is trust in a social system. In particular, according to the authors, the concept of vulnerability is what is missing in the definition of Gambetta, given that “Trust is not taking risk per se, but rather it is a willingness to take risk.” (Mayer et al., 1995, p. 712). Trust is then differentiated from related constructs like:

  • Cooperation, which is intensively studied also by Gambetta. The authors highlight the fact that trust is not a conditio sine qua non for cooperation, since it is possible to cooperate with someone untrustworthy (for example when there are external controls and constraints, as discussed in the previous section).
  • Confidence: the main difference lies in the fact that, with trust, risk must be assumed, while with confidence it is not necessary. Moreover, when a person chooses to trust, he will consider a set of possible alternatives, while that is not the case with confidence.
  • Predictability: trust and predictability are both ways to cope with uncertainty but, if a person is predictable, it does not necessarily mean that it is worth placing trust in him. This is because it is possible to predict that the other person will consistently behave in negative ways (so that uncertainty is reduced), yet no rational individual would place trust in him.

Then, the characteristics of the trustor and the trustee are analyzed, which together can initiate a trust relationship between the two agents. The most important feature of the trustor is his propensity to trust another person, which is a personal trait constant over time and across situations. It is a general willingness to trust and it is not related to any particular party, since it is measurable before any interaction with the other agent. However, each trustor has different levels of trust for different trustees, which arise after the relationship is initiated. Therefore, they depend on the characteristics and actions of the trustee, i.e. his trustworthiness. According to the authors, there are three main characteristics of the trustee that are able to explain trustworthiness:

  • Ability: it is defined as the set of skills and competencies of the trustee over a specific domain. It is possible to trust another person to perform a particular action if the other agent is competent in that field; otherwise he should not be trusted, even though he may be committed to completing the task. Therefore, trust should not be intended in absolute terms, but over a specific field of knowledge.
  • Benevolence: it is a personal trait of the trustee towards the trustor, related to how much the former wants good for the latter. More benevolence leads to higher trust because the trustor can be more confident that the trustee will perform the action taking the trustor's benefit into account as well, and not only the trustee's own egoistic motives.
  • Integrity: it is defined as “the trustor’s perception that the trustee adheres to a set of principles that the trustor finds acceptable” (Mayer et al., 1995, p. 719). A trustee's integrity therefore depends on what the trustor's set of beliefs is. If the trustor thinks that the integrity of the trustee is not sufficient, he will not engage in a trust relationship with him.

In particular, integrity will be central in the early stages of the relationship, before any insights are gained; then, benevolence will become important over time, as the trustor retrieves information during the course of the relationship; ability, instead, stays important from the beginning to the end. After engaging in the trust relationship, the trustor will be able to gain new data and information, through which he can update his beliefs about these three characteristics of the trustee, eventually deciding whether the placement of trust is still reasonable. While a trustor tries to assess these characteristics of the trustee, the role of context becomes important because it affects ability (for example because a change in the situation may change the skills needed to complete a certain task), the level of benevolence (for example if the trustee changes his behavior over time) and integrity (for example because a certain action of the trustee is not interpreted as coherent with the trustor's set of values only because he was obliged to perform it by the specific situation).
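One standard way to formalize the belief-updating process described above is Bayesian: the trustor holds a Beta prior over the trustee's reliability and updates it after each observed kept or broken commitment. This Beta-Bernoulli framing is an assumption of this sketch, not a model proposed by Mayer et al. (1995).

```python
# Hypothetical Bayesian formalization of the trustor's belief updating:
# a Beta(a, b) prior over the trustee's reliability, where a counts kept
# promises and b counts broken ones (plus prior pseudo-counts).

def update_belief(a: float, b: float, fulfilled: bool):
    """One Beta-Bernoulli update after observing the trustee's behavior."""
    return (a + 1, b) if fulfilled else (a, b + 1)

def expected_trustworthiness(a: float, b: float) -> float:
    """Posterior mean of the trustee's reliability."""
    return a / (a + b)

a, b = 1.0, 1.0                     # uniform prior: no information yet
for outcome in [True, True, True, False, True]:
    a, b = update_belief(a, b, outcome)

print(expected_trustworthiness(a, b))  # 4 kept, 1 broken -> 5/7
```

Under this framing, the trustor's prior (his propensity to trust) matters most at the start, while accumulated observations dominate as the relationship matures, mirroring the shifting weights of integrity, benevolence and ability described above.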

Finally, the authors deal with the risk involved in trusting. In particular, they highlight the fact that there is no risk taking in the propensity to trust itself: risk arises only when an agent effectively engages in a trust relationship. However, the form and the level of risk assumed by the trustor will depend on the level of trust involved in the relationship: the more the trustor trusts the trustee, the more risk he will be willing to take. So, before initiating a trust relationship, the agent has to assess whether the level of trust is higher or lower than the perceived level of risk, so that he can decide whether it makes sense to engage in such a relationship.

3. How to measure trust

Trust is therefore a fundamental device in human society and it is also important in economics and finance, as this paper will later explain. A natural question arises: how is it possible to measure trust? The question is not easy to answer, since trust is a human sentiment, therefore subjective and emotional, and it is interwoven with other human sentiments and beliefs. Alós-Ferrer and Farolfi (2019) review the major methods used in the literature to measure trust, underlining the main limitations of each method.

3.1 Trust Games and Game Theory

Experimental economics has relied intensively on game theory to quantify trust. The games mostly used nowadays are various versions of the TRUST GAME, invented by Berg et al. (1995). Alós-Ferrer and Farolfi (2019) describe it as follows: “A first agent, called the trustor, is given a monetary endowment X, and can choose which fraction p of it (zero being an option) will be sent to the second agent, called the trustee. The transfer p · X is then gone, and there is nothing the trustor can do to ensure a return of any kind. Before the transfer arrives into the trustee’s hands, the transfer is magnified by a factor K > 1. The trustee is free to keep the whole amount without repercussion. Crucially, however, the trustee has the option to send a fraction q of the received transfer back to the trustor, hence honoring the trustor’s initial sacrifice” (Alós-Ferrer & Farolfi, 2019, p. 1). The transfer of the trustor can become a measure of trust, and the subsequent transfer of the trustee a measure of trustworthiness. These games exhibit some important features of trust as described by Gambetta (2000): the trustor's and trustee's decisions are free and voluntary, uncertainty and risk are involved and there are possible repercussions for the trustor (a loss in utility).
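The payoff structure just quoted can be written down directly. The parameter values below (K = 3, X = 10) are illustrative, although K = 3 is a common choice in the experimental literature:

```python
# Minimal implementation of the trust game of Berg et al. (1995) as
# described above: endowment X, trustor sends fraction p, the transfer
# is multiplied by K, the trustee returns fraction q.

def trust_game(X: float, p: float, q: float, K: float = 3.0):
    """Return (trustor_payoff, trustee_payoff).

    p is read as a measure of trust, q as a measure of trustworthiness.
    """
    sent = p * X                # the trustor's sacrifice
    received = K * sent        # magnified before reaching the trustee
    returned = q * received    # the trustee's voluntary repayment
    trustor_payoff = X - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Full trust met with an even split leaves both better off than no trust:
print(trust_game(X=10, p=1.0, q=0.5))  # (15.0, 15.0)
# A fully selfish trustee leaves the trustor with nothing:
print(trust_game(X=10, p=1.0, q=0.0))  # (0.0, 30.0)
```

The second line shows why trusting is risky: the backward-induction prediction for purely selfish players is q = 0 and therefore p = 0, so any positive transfer observed in the laboratory is evidence of trust.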

However, despite the popularity of this method, there are various limitations that need to be addressed. In the agents' behavior, there might be motivational confounds that affect the measurement of trust and trustworthiness, like selfish or altruistic tendencies, efficiency reasons, or prior personal preferences (like inequity aversion). To address this problem, the authors suggest taking as a measure the difference between the transfers in the trust game and those in the Dictator Game (i.e. a game where the proposer's decisions are implemented without the responder being able to do anything).

Then, the question of the agents' risk attitudes is addressed. Trust involves a certain risk (given that the trustor cannot monitor the response of the trustee), so attitudes toward risk may affect the monetary transfers. The evidence is mixed, with early studies (like Houser et al., 2010) finding no relationship between risk attitudes and trust and others finding a correlation. The lack of agreement might be due to the concept of risk involved in the trust games themselves, which is not a pure financial risk but rather betrayal aversion, i.e. the risk and fear of being betrayed by another human being. Taking this into account, the authors mention the study of Bohnet and Zeckhauser (2004), where a “betrayal aversion” was found in the decisions of the agents, distinct from standard risk aversion. Therefore, to disentangle the trust component and the risk component of the agents' transfers, standard measures of risk might not fit properly. The authors, however, also criticize the use of game variants to address this issue, since new measures may capture other undesired effects.

Another problem can arise when there are changes in the parameters, implementation and description of the trust game: the response of the agents might not be consistent across contexts, making different experiments impossible to compare. For instance, increasing the multiplier K will likely increase the trustor's transfer and also the fraction returned by the trustee, according to Lenton and Mosley (2011). Moreover, the way the game is framed can also have an impact: Burnham et al. (2000) show that the responses of the agents involved depend on whether, in the instructions of the game, the other agent was called partner or opponent; in the former case, the trustor trusted the trustee more. However, if the game is not framed at all, the participants might create their own frame, thus interpreting the play in different and unpredictable ways, leading to biased results.

3.2 Surveys

Another possible measure of trust lies in the use of surveys. The most important example is the General Social Survey (GSS) of the U.S. National Opinion Research Center. The question asked is: “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people?”. The possible answers are: “Most people can be trusted”, “Can’t be too careful” or “I don’t know”. This question is also used in other important surveys, like the EVS (European Values Survey), the WVS (World Values Survey), the BHPS (British Household Panel Study) and the ANES (American National Election Studies).

This method is not immune from problems. For example, the interpretation of each individual might play a role in the response, as seen for the Trust Game. Moreover, the relationship between these two methods must be taken into account. Ideally, if both were valid and consistent, the responses should be highly correlated. However, the evidence is mixed: Glaeser et al. (2000) find no correlation between the two measures, while Fehr et al. (2002) find evidence to the contrary. An explanation could be that surveys test a general propensity to trust, while Trust Games measure behavior in a specific strategic situation. The concept of trust is therefore not uniquely determined, and different methodologies might capture different aspects of this complex human attitude.

Moreover, the authors suggest that, if surveys are used as a measure, one must take into account various controls (like culture, geography and age) to interpret and therefore compare the responses.

3.3 Neuroscience

The new frontier in the measurement of trust is represented by neuroscience, which aims to provide more objective, biologically grounded methods.

Firstly, the relationship between oxytocin (OT) and trust is investigated, in particular by linking OT levels with behavior in the Trust Game. Zak et al. (2005) find that OT levels can predict trustees' trustworthiness but not trustors' transfers. However, when the change in OT levels is endogenous (i.e. natural, as in the paper mentioned above), the studies cannot establish causality. Hence, another set of studies, where the level of OT was exogenously determined, is examined. Kosfeld et al. (2005) find that the treatment group in their experiment (i.e. the people to whom OT was administered) presents larger trustors' transfers compared to the control group, but no significant differences in the trustees' transfers. Moreover, their results suggest that OT causally increases trust through a reduction of betrayal aversion and that it does not increase risk-taking behavior or prosocial aptitudes in general. The two methods of investigation therefore lead to mutually inconsistent results, and no conclusion can be reached: the relationship of OT with trust and trustworthiness is not as simple as previously thought.

Finally, the authors introduce the latest studies on the use of brain imaging to understand where trust comes from and how it forms. This might be useful to develop more reliable measures of trust in the future.

4. Trust as a source of comparative advantage

Cingano and Pinotti (2016) study the effect of trust on firm organization and on comparative advantage. The authors argue that interpersonal trust means more delegation of decisions within a firm, resulting in a larger firm size and in the expansion of the more productive units. If trust is established, it is possible to expand the firm beyond familiar and friendly relationships, thus applying the firm's own productivity advantage to a larger amount of inputs, given that the firm is bigger and has more factors of production. The principal-agent problem (which comes with delegation and prevents higher levels of it) can be partially solved by this human device. In particular, higher delegation raises productivity through:

  • higher exploitation of the informational advantage of managers and of the specific skills of some workers.
  • the reduction of information costs.
  • more resilience and the ability to cope with changes in profit and growth opportunities.

Studying a sample of Italian and European companies, the authors find that trust, together with human capital and intangible intensity, is associated with greater delegation, which, in turn, is associated with larger firm size. Their findings suggest that high-trust countries present a higher value added per worker and higher exports in industries where delegation is needed, thus making trust a source of comparative advantage in trade patterns. This effect results from a shift from smaller firms toward bigger ones.

The authors test their hypotheses using empirical data obtained from surveys. They retrieve data from:

  • The INVIND survey from the Bank of Italy, which provides information about inputs, outputs, internal organization and governance of a sample of more than 6500 firms. These data are used to test trust differences across Italian regions.
  • The World Values Survey (WVS) and the European Social Survey (ESS) to measure interpersonal trust and delegation.
  • The OECD Structural Analysis Database (STAN) and the OECD Business De- mographic Statistics, which provide information about value added per worker, organization and number of workers of European firms.

The analysis starts from the following regression:

$$Y_{jr} = \beta \,(Trust_r \times Delegation_j) + \gamma X_{jr} + \mu_r + \mu_j + \varepsilon_{jr}$$

where $Y_{jr}$ is industry specialization (measured through value added per worker or exports), $Trust_r$ is the average level of trust in region $r$, $Delegation_j$ is a measure of the need for delegation in industry $j$, $X_{jr}$ controls for other determinants of specialization, and $\mu_r$ and $\mu_j$ are geographical and industry fixed effects.
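As a toy illustration of this trust-delegation interaction specification, the coefficient on $Trust_r \times Delegation_j$ can be recovered by OLS with region and industry dummies. The data below are simulated with an invented true coefficient ($\beta = 2.0$); none of it comes from Cingano and Pinotti's dataset.

```python
# Synthetic illustration of a specialization regression with a
# Trust_r x Delegation_j interaction and region/industry fixed effects.
# All data and parameters are simulated, not the authors' estimates.
import numpy as np

rng = np.random.default_rng(0)
R, J = 20, 15                              # regions and industries
trust = rng.uniform(0.2, 0.8, R)           # Trust_r
delegation = rng.uniform(0.0, 1.0, J)      # Delegation_j
beta = 2.0                                 # true interaction coefficient

# Y_jr = beta * Trust_r * Delegation_j + mu_r + mu_j + noise
mu_r = rng.normal(0, 0.5, R)
mu_j = rng.normal(0, 0.5, J)
Y = (beta * np.outer(trust, delegation)
     + mu_r[:, None] + mu_j[None, :]
     + rng.normal(0, 0.1, (R, J)))

# OLS with dummies absorbing the region and industry effects
interaction = np.outer(trust, delegation).ravel()
D_r = np.kron(np.eye(R), np.ones((J, 1)))          # region dummies
D_j = np.kron(np.ones((R, 1)), np.eye(J))          # industry dummies
X = np.column_stack([interaction, D_r, D_j[:, :-1]])  # drop one dummy
beta_hat = np.linalg.lstsq(X, Y.ravel(), rcond=None)[0][0]

print(round(beta_hat, 2))   # close to the true beta of 2.0
```

The point of the exercise is the identification logic of the specification above: once the fixed effects soak up everything region-specific and industry-specific, only the interaction identifies whether high-trust regions specialize in delegation-intensive industries.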

Then, the authors estimate $Delegation_j$ through the following firm-level regression:

$$Centers_{ijr} = \theta \ln L_{ijr} + f_j + f_r + \varepsilon_{ijr}$$

where $Centers_{ijr}$ is the number of responsibility centers of firm $i$ (a measure of delegation inside firms), $\ln L_{ijr}$ is the log of the number of workers (which is held fixed) and $f_j$ and $f_r$ are industry and regional controls.

In particular, the analysis shows that, for the Italian sample, higher trust leads to an increase in the production of delegation-intensive industries. Starting with the log of value added per worker as the dependent variable, the authors add a series of controls. Introducing human capital, the estimates show that it remains the main source of the pattern of specialization but, despite being correlated with delegation (which in turn is correlated with trust), the latter variable remains statistically significant. Then, two other controls are introduced: financial development and judicial quality. However, they do not affect the coefficient of trust, thus making the estimation more robust and consistent. The results are similar when the dependent variable is exports. For the international sample, the analysis is more complicated because different countries present different institutional dimensions, like labor market regulations and property protections. The results, however, are very similar, making the thesis consistent also at the international level.

5. Trust and the stock market

Guiso et al. (2008) study the effect of trust on stock market participation across individuals and across countries. Starting from Gambetta (2000), they define trust as “the subjective probability individuals attribute to the possibility of being cheated” (Guiso et al., 2008, p. 2557), which depends on the characteristics of the financial system and the individual priors and predisposition to trust.

Firstly, they develop a theoretical model reproducing the effect of trust on portfolio decisions, starting with a two-asset model (one safe asset and one stock). They assume that investors know the distribution of returns but are worried, with a subjective probability p, about other bad events, like the possibility of a fraud perpetrated by their broker, which would lead to a zero return on the stock. They also assume zero participation costs. Given a level of wealth W, with $\tilde{r}$ the (gross) return on the stock investment and $r_f$ the (gross) risk-free rate, each agent chooses the share $\alpha$ of his wealth to invest in the risky asset that maximizes his expected utility:

$$\max_{\alpha}\; (1-p)\,E\big[u\big(\alpha W \tilde{r} + (1-\alpha)W r_f\big)\big] + p\,u\big((1-\alpha)W r_f\big)$$

They also show that a risk-averse individual will invest in the stock market only if his subjective probability of being cheated is below a threshold $\bar{p} = (\bar{r} - r_f)/\bar{r}$, where $\bar{r}$ is the mean of the true distribution of the stock return. This relationship comes from the fact that an investor buys the risky asset only if its expected return, $(1-p) \times \bar{r} + p \times 0$, is higher than the risk-free rate.
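A quick worked example of the threshold, reading $\bar{r}$ and $r_f$ as gross returns; the numbers are illustrative, not from the paper:

```python
# Participation threshold p_bar = (r_bar - r_f) / r_bar, with r_bar and
# r_f read as gross returns. The numeric values are hypothetical.

def participation_threshold(r_bar: float, r_f: float) -> float:
    """Largest subjective probability of being cheated at which the
    stock still beats the safe asset: (1 - p) * r_bar > r_f."""
    return (r_bar - r_f) / r_bar

p_bar = participation_threshold(r_bar=1.08, r_f=1.02)
print(round(p_bar, 4))               # 0.0556

# An investor who puts the probability of fraud at 2% participates;
# one who puts it at 10% stays out, whatever his wealth W.
print(0.02 < p_bar, 0.10 < p_bar)    # True False
```

Note the wealth-independence emphasized in the model: the comparison involves only returns and the subjective probability p, which is why a rich but distrustful investor can rationally stay out of the market.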

An important result of this model is that the decision to participate in the stock market depends on the subjective probability p of being cheated (since it reduces the expected return of the investment) and does not depend on the level of W. Since W is not significantly correlated with trust (as calculated through the survey data used in the empirical analysis), this can explain why even the wealthy might not engage in stock trading. Moreover, $\alpha$ itself depends on the level of trust: more trust means more wealth invested in risky assets, and vice versa.

Then, participation costs are introduced in the theoretical model. To enter the market, the investor now has to pay a fixed cost f (thus reducing the allocable wealth to W - f). As f increases, a higher level of trust becomes necessary to invest in stocks ($\bar{p}$ decreases). In particular, less trust reduces the return on the stock investment (thus making participation less attractive) because it reduces the share of wealth invested in stocks and it reduces the expected utility from participating.
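Both effects can be seen in a numerical sketch of the portfolio problem. Everything below is a hypothetical parameterization, not the authors' calibration: CRRA utility with coefficient 3, a fraud event that wipes out the amount invested in the stock, and illustrative net returns.

```python
# Numerical sketch of the participation decision: an investor with CRRA
# utility chooses the share alpha in the stock, which pays r_bar with
# probability (1 - p) and loses the stake (fraud) with probability p.
# All parameter values are hypothetical.
import numpy as np

def optimal_share(p, W=100.0, r_bar=0.08, r_f=0.02, gamma=3.0, f=0.0):
    """Grid-search the alpha maximizing expected CRRA utility; the
    fixed participation cost f is paid only if alpha > 0."""
    def u(c):
        return c ** (1 - gamma) / (1 - gamma)
    best_alpha, best_eu = 0.0, u(W * (1 + r_f))   # staying out of the market
    for alpha in np.linspace(0.01, 1.0, 100):
        w = W - f                                  # pay the entry cost
        good = (1 - alpha) * w * (1 + r_f) + alpha * w * (1 + r_bar)
        bad = (1 - alpha) * w * (1 + r_f)          # the stake goes to zero
        eu = (1 - p) * u(good) + p * u(bad)
        if eu > best_eu:
            best_alpha, best_eu = alpha, eu
    return best_alpha

a_trusting = optimal_share(p=0.01)   # high trust
a_wary = optimal_share(p=0.04)       # lower trust
print(a_trusting > a_wary > 0.0)     # True: less trust, smaller share
print(optimal_share(p=0.6, f=5.0))   # 0.0: too distrustful to pay the cost
```

The grid search reproduces the two comparative statics of the model: conditional on participating, a less trusting investor holds a smaller $\alpha$, and once the fixed cost f is positive, sufficiently low trust makes staying out of the market optimal.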

Finally, the authors demonstrate that risk tolerance and trust are two different things by looking at the optimal number of stocks held: this number increases with trust and it increases also with risk aversion (because of the benefits of diversification). Therefore, since risk tolerance reduces the optimal number of stocks while trust increases it, the latter cannot be a proxy for the former. As the empirical analysis will demonstrate, this is consistent with the data. This result is also reinforced by the authors' finding that individuals with high levels of trust buy more insurance, while risk-tolerant individuals buy less.

The authors use survey data to test their model. In particular, they employ the DNB Household Survey (to which they have directly contributed), which covers about 1,990 individuals and tries to capture their level of generalized trust, their risk and ambiguity aversion and their optimism. It also reports some statistics about households' assets, distinguishing in particular between listed and unlisted stocks and between securities held directly or through financial intermediaries. To measure generalized trust, this survey uses the same question as the World Values Survey (see section 3.2 for an explanation); to measure risk aversion and ambiguity aversion, the authors ask the interviewed their willingness to pay for some lotteries; to measure optimism, they ask them to quantify their agreement (on a scale from 1 to 5) with the following statement: “I expect more good things to happen to me than bad things”. Then, the Italian Bank customers survey is used to capture personalized trust, i.e. the trust that an individual has towards his financial intermediary, which could be different from the general propensity to trust. This data set contains information about the financial assets of the interviewed and their demographic characteristics. More importantly, to measure personalized trust, the survey asks the following question: “How much do you trust your bank official or broker as financial advisor for your investment decisions?”.

The empirical analysis confirms their hypothesis. Starting from the relationship between generalized trust (i.e. the level of trust measured in the survey) and stock market participation, the authors find that trust has a positive and highly significant coefficient (so more trust means more participation), even after controlling for a number of variables (like age, sex and wealth). In particular, “Trusting others increases the probability of direct participation in the stock market by 6.5 percentage points” (Guiso et al., 2008, p. 2578). Risk aversion and ambiguity aversion do not seem significant, nor does optimism, since the coefficient of trust remains unchanged. Moreover, when studying the effect of wealth, the authors find that the coefficient of trust remains significant even after controlling for this variable, thus supporting their previous statement: the lack of trust may explain why rich people do not invest in stocks even though they should not be affected by participation costs. Then, the relationship between trust and the amount invested in risky assets is studied. The results confirm, again, the hypothesis: “Individuals who trust have a 3.4 percentage points higher share in stocks, or about 15.5% of the sample mean” (Guiso et al., 2008, p. 2580). The same results hold for risky assets in general: risk and ambiguity aversion are not statistically significant in this case either. However, a significant control is represented by the level of education. The authors find that trust increases the holding of risky securities for everyone, but less so for more educated people, since they know better how the market works compared to the less educated and are less affected by priors and cultural stereotypes.

Considering now the Italian Bank customers survey, the results confirm the previous ones: trust in one's own financial intermediary increases the probability of investing in stocks and the share of wealth allocated to this type of security.

Finally, the authors investigate the implications of the level of trust for market participation across countries. The analysis is based on the following reasoning: less trust should mean that agents are less willing to invest and, in turn, firms will be less willing to float their equity, given that it is less rewarding. Therefore, countries with lower levels of trust should have lower market participation. The empirical analysis confirms these claims: trust has a positive and significant effect on stock ownership among individuals and also a positive effect on stock market capitalization.

6. Money as a substitute for trust

Gale (1978) develops a theoretical model to study the effect of the introduction of money in an economy characterized by a lack of trust between its agents. The author starts from the Arrow-Debreu model of Walrasian equilibrium. This model is characterized by a finite number of consumers (who have an initial endowment of resources) and commodities, perfect competition in all markets and constant returns to scale.

Moreover, the markets are complete, which means that all transactions in the economy can be arranged at one time. This is made possible because transactions that involve the delivery of a commodity in a different time period (i.e. in t=0 a commodity is sold but the delivery will occur in t=1) can be concluded through contracts at time t=0. The contract specifies that the delivery will occur in t=1, even though the transaction itself is completed in t=0. Therefore, the contract is seen as the commodity being traded. This mechanism operates under the assumption that there is no uncertainty in the market. In such an environment, agents can trust each other to fulfill the contracts they have agreed upon. As a result, there is no need to distinguish between the contracts and their execution. Nevertheless, if for some reason agents start not to trust each other, and therefore uncertainty arises, some agents may prefer not to fulfill their promises and other agents, anticipating that, might not engage in a transaction in the first place. If trust were to vanish, therefore, the allocation process would break down if no substitute were found. The scholar demonstrates that money can be a substitute for trust and can permit the allocation and redistribution of resources even in the absence of trust.

To illustrate that formally, the author employs the concept of the core, that is, "the set of attainable allocations such that (a) neither agent can make himself better off by remaining self-sufficient and (b) two agents cannot both be made better off by any feasible redistribution of their joint endowment" (Gale, 1978, p. 459), which he extends into the concept of the sequential core to integrate time periods and uncertainty about the outcome of a contract. An allocation of commodities is trustworthy if it belongs to the sequential core, that is, if it cannot be improved by any redistribution of resources in any time period. If this were not the case, an agent would have the incentive to break the contract in later periods. Therefore, any exchange of commodities without trust would not form a sequential core, because agents would have the incentive to deviate from equilibrium to increase their own utility.

To resolve this issue, the author introduces money into the model. In particular, each agent is given an endowment of money at time t=0 and it is assumed that at the end of time t=1 (the second and last period) the same amount of money must be returned as a tax. Implicitly, the model introduces a social institution (for example a government) that issues fiat money, which has no intrinsic value but is guaranteed by the imposition of the government itself (as is the case in modern economies). Between the two periods, the agents can exchange money among themselves. This solves the issue: the agents who were reluctant to keep their promises in the model without money and without trust now have an incentive to fulfill the contract, given that they need the money to pay their taxes. Money does not restore trust among agents, but it acts as a substitute, a way to enforce previous contracts and agreements. The possibility for the government to directly intervene in the fulfillment of contracts should be discarded, since it is not plausible that a human institution could be so almighty as to oversee every transaction in a complex economy. Therefore, money can create the conditions for trustworthy transactions (without trust) in a decentralized way. However, the institution must be able to credibly impose the payment of taxes, otherwise agents would face the same problem as before. To do that, penalties for those who do not want to pay taxes should be sufficiently gruesome, but the author does not quantify the penalty. Moreover, the author argues that, although money can substitute for trust, there could be a loss in overall utility with respect to the case with trust. In the model, the social institution is introduced without any explicit cost, but this is unlikely to be the case in reality, since introducing a government that is able to enforce tax payment and issue securities is certainly not free.

6.1 The gruesome penalty

Grimes (1990) continues the work of Gale (1978), analyzing the role of money in the same theoretical framework studied by the previous author. The results of Gale are confirmed: without money, the outcome of an economy without trust is autarky, since no transaction can effectively occur. With the introduction of money, however, it is possible to replicate the allocation of the economy with trust. The contribution of his work with respect to the research of his predecessor is a quantification of the gruesome penalty that agents face when they do not respect their tax obligations.

In particular, the author shows that the simple introduction of money does not necessarily replicate the outcomes of an economy with trust, because a sufficient incentive (i.e. a penalty higher than a certain threshold) must be introduced to make it inefficient for the agents not to fulfill their promises. Below this threshold, the increase in utility derived from reneging on the contract is higher than the reduction in utility due to the penalty. Therefore, the optimal choice is not to fulfill the agreement. On the contrary, above that threshold, the optimal choice is to fulfill the contracts (therefore replicating the allocation with trust). It is worth noting that the intensity of the penalty has no effect on the final allocation of goods, since they are already Pareto-efficiently allocated, but the author shows that it has an impact on prices.
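The incentive condition above can be sketched in a few lines (a hypothetical illustration of the threshold logic, not Grimes's actual formulation; the function and parameter names are ours):

```python
def fulfills_contract(gain_from_reneging: float, penalty: float) -> bool:
    """Return True if fulfilling the contract is the agent's optimal choice.

    Below the threshold (penalty < gain from reneging), breaking the
    agreement yields more utility than the penalty costs, so the agent
    reneges; at or above it, fulfilling the contract is optimal.
    Names are illustrative, not taken from Grimes (1990).
    """
    return penalty >= gain_from_reneging
```

For example, with a gain of 2 from reneging, a penalty of 1 fails to enforce the contract, while a penalty of 3 replicates the allocation with trust.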

To calculate the threshold, the author considers a world with two agents, two periods, no uncertainty and one good in each period. Each agent's endowment in each period is defined as $(1-\lambda, \lambda)$ and $(\lambda, 1-\lambda)$, where $\lambda$ is a small positive number. The other features of this world are the same as those described in the previous paragraph. His calculations show that, to replicate trust, the maximum penalty should be:

where $q(0)$ is the maximum penalty and $\lambda$ is the small positive number described above.

7. Trusting an institution

Meylahn et al. (2023) study the dynamics of trust between individuals and institutions using a stylized model of social network learning. Firstly, the authors define a model to describe the relationship between only one individual and the institution, in which the agent has repeated opportunities to place trust. The institution's behavior is modeled by a parameter $\theta$ that represents its trustworthiness, i.e. the probability that the institution honors the trust placed by the individual. So, in each round the institution honors the trust that has been placed by the agent with probability $\theta$ and abuses it with probability $1 - \theta$. Similarly, the agent, in each round, can decide whether or not to place trust in the institution. The decisions taken by the two are independent in each round and the agent observes the actions of the institution only when he places trust. If trust is honored, he gains $r$, while if it is abused he loses $c$. Therefore, his expected utility is $r\theta - c(1-\theta)$. The agent behaves with myopic rationality, so he maximizes the expected utility in each round without taking future rounds into consideration. Moreover, the agent starts the interaction with the institution with a prior belief $P_0$, which is a function of $\alpha$ and $\beta$, which can be considered the number of times trust was honored and betrayed in a past setting, before the beginning of the experiment. The variables of interest are $\tau$, the number of rounds after which the agent decides not to place trust anymore, through which the probability of quitting is determined, and $q$, the expected time spent playing before quitting. In each round, the agent updates his knowledge by taking into consideration the actions taken by the institution and, therefore, he updates his estimate of $\theta$.
If the agent quits, he will never trust the institution again, given that there is no possibility to update his estimation of the trustworthiness of the institution.
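The single-agent dynamics can be sketched with a minimal Beta-Bernoulli simulation, assuming the prior is summarized by $\alpha$ and $\beta$ as past honored and betrayed interactions (function and parameter names are ours, not the authors'):

```python
import random

def simulate_trust(theta, r, c, alpha, beta, max_rounds=500, seed=0):
    """Myopic agent facing an institution of trustworthiness theta.

    Each round the agent estimates theta by the posterior mean a/(a+b)
    and places trust only if the expected utility r*p - c*(1-p) is
    non-negative; honored trust increments a, abused trust increments b.
    Returns (quit_round, final_estimate); quit_round is None if the
    agent never quits within max_rounds.
    """
    rng = random.Random(seed)
    a, b = alpha, beta
    for t in range(1, max_rounds + 1):
        p = a / (a + b)                 # current estimate of theta
        if r * p - c * (1 - p) < 0:     # expected utility of trusting
            return t, p                 # quits and never trusts again
        if rng.random() < theta:        # institution honors w.p. theta
            a += 1
        else:
            b += 1
    return None, a / (a + b)
```

For example, an institution that always abuses trust ($\theta = 0$) is abandoned after a few rounds, while a fully reliable one ($\theta = 1$) is trusted indefinitely.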

Then, the authors define another model where a second agent is added: the relationship between the two plays an important role in determining the relationship with the institution. The agents' behavior and the institution's behavior share the same characteristics as in the model with one agent: the agents choose in each round whether to place trust or not, they have a prior belief and the institution decides whether to honor or betray the agents' trust. The authors further assume that both agents share the same prior. The key feature of this model is that each agent, in each round, receives information from the other, through which he can update his own information. Two cases are analyzed:

  • Agents fully communicate to each other the interactions they have with the institution. Given that the agents have the same prior and the same information available, they will have the same estimate of $\theta$.
  • Agents do not communicate explicitly, but they only observe the actions of the other agent. Therefore, the information received from the other agent will be incorporated only a round later.

They run their model 4000 times for the single-agent model and 2000 times for the two-agent models, for a maximum of 500 rounds. They find that the probability of quitting in most of the settings (i.e. in various calibrations of the parameters) is higher in the single-agent model. When considering only the two-agent models, the probability is higher when the agents can only observe the actions of the other but are not able to fully communicate. However, there are some exceptions, and in some simulations the observable-actions setting outperforms the full-communication model, thus having a lower probability of quitting. The expected time to quit is lower in the two-agent models than in the single-agent case, in particular in the model where agents fully communicate (and therefore receive more information). This is due to the fact that having more information makes their estimates more precise: they either quit quickly or they do not, since they need less time to obtain a good estimate of $\theta$; if the estimate is not high enough, they will quit after fewer rounds, otherwise they are likely to place trust indefinitely.

Overall, the authors find that communication is always helpful, since it increases the probability of continuing to trust a reliable institution and decreases the expected time before quitting an untrustworthy one. Moreover, they find that more optimistic priors increase the probability of trusting a trustworthy institution. Finally, they highlight that it is not possible to say which of the two-agent models is better, since it depends on the parameter setting and on which criterion is taken into consideration.

8. Trust and the blockchain

As highlighted before, trust, with its dynamics, is fundamental in every aspect of a society and is what permits societies themselves to evolve and transform. Without trust, each individual would have the burden of verifying the reliability of every other agent he encounters, which would be impossible. Trust is also what permitted the birth of modern finance, with the Buttonwood agreement of 1792 that led to the creation of the stock market. In recent years, however, trust within modern societies has been decreasing, putting at risk the way society itself operates. People not only do not trust each other anymore, but they also do not trust the government, the media, or any other authority that was once considered credible and reliable. It is in this framework that "a new architecture of trust" was developed, leading to the birth of Bitcoin and the blockchain technology in 2009. Werbach (2018) analyzes the relationship between trust and the blockchain in his book "The blockchain and the new architecture of trust".

8.1 What are the blockchain and Bitcoin

The blockchain is a distributed and decentralized digital ledger (i.e. a record of accounts) that records transactions across a network of computers in a secure, transparent, and tamper-proof manner. In a blockchain, transactions are grouped into blocks, which are linked together in chronological and linear order, forming a chain of blocks. Each block contains a list of transactions, a timestamp, and a reference to the previous block in the chain, creating a verifiable record of all transactions that have ever occurred on the network. One of the key features of a blockchain is its consensus mechanism, which ensures that all participants in the network agree on the state of the ledger. Once a block is added to the blockchain, it is considered immutable, meaning that the data in the block cannot be altered or deleted without the consensus of the majority of the network. This makes blockchains secure and resistant to tampering or manipulation. The transactions registered on the blockchain are performed through smart contracts, which are pieces of code that execute a predetermined function, like transferring a bitcoin, with no possibility to alter the agreement. Finally, a cryptocurrency is a digital currency that runs on the blockchain network.
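The chaining of blocks described above can be illustrated with a toy example (a simplified sketch using SHA-256; the field names are illustrative, not Bitcoin's actual block format):

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """A toy block: a transaction list, a timestamp, and a reference
    (hash) to the previous block, as in the description above."""
    block = {"transactions": transactions,
             "timestamp": time.time(),
             "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Each block must reference the hash of its predecessor:
    breaking one link invalidates every later block."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block(["coinbase"], prev_hash="0" * 64)
chain = [genesis, make_block(["alice -> bob: 1 BTC"], genesis["hash"])]
```

Tampering with any recorded block changes its hash and therefore breaks the link stored in every subsequent block, which is what makes the record verifiable.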

Bitcoin, introduced by Nakamoto (2009), was the first digital currency and the first example of a blockchain. It relies on 3 elements: cryptography, digital cash and distributed systems. Cryptography can be considered the science of secure communications and it is employed for this purpose in the blockchain technology. Each agent that interacts with Bitcoin is identified with a private key associated with a public key through the mechanism of cryptography, so that each transaction can be verified and associated with a user without the need to disclose his private key. What is called a coin is in reality a chain of signatures of verified transactions. Bitcoins come from the unspent outputs of previous transactions, all registered on the blockchain. Each transaction is verified by a network of nodes (i.e. participants in a distributed network that maintain a copy of the blockchain ledger and take part in the consensus process). All the agents need to trust the state of the ledger: this is achieved by the consensus mechanism. Consensus comes from a process called mining, in which agents compete to verify the transactions and create a new block of the blockchain, in exchange for a reward (transaction fees and newly mined bitcoins). The winner is randomly decided, but all the other agents verify independently that the new block is legitimate. Being untrustworthy is not profitable: mining is an expensive activity, because miners engage in a proof-of-work system, where they have to solve a cryptographic puzzle to earn the right to validate the transactions. This requires energy and money, and the more energy and money an agent puts into mining, the more chances he has to win. The benefits of cheating are much lower than the costs, so each agent can trust the state of the ledger because there are no incentives to deviate.
Finally, the consensus mechanism also serves to make the ledger immutable, because each block is linked to the hash of the previous one. Changing a past block would mean forking the chain, and this would be rejected by the majority of users. Only if an agent controlled more than 50% of the computing power (which is almost impossible) would such a change be viable.
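The proof-of-work puzzle can be sketched as follows (a simplified stand-in for Bitcoin's actual difficulty target: here the miner searches for a nonce whose SHA-256 digest starts with a given number of zero hex digits):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force search for a valid nonce: the work is expensive,
    and the chance of winning is proportional to how many guesses per
    second a miner can make, i.e. to his computing power."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: any node can check the winner with
    a single hash, which is why all agents can audit the new block."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry between finding a nonce (expensive) and checking it (one hash) is what lets every node independently verify the winner's block.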

8.2 A new form of trust

The innovation of the blockchain is connected to the fact that every participant can trust the information recorded on the ledger without necessarily trusting another agent to validate it. There is no need for a central authority to validate the transactions, and trust is reinforced by the fact that there are mechanisms that make it impossible to alter the transactions already recorded on the ledger. The idea of Satoshi Nakamoto was to design a system that, through incentives, aligned the needs and objectives of every participant with each other, so that what is recorded on the ledger can be trusted without trusting (or knowing) the other agents. Nakamoto claimed to have eliminated the need for trust but, according to Werbach (2018), that would be impossible. What Nakamoto created is trust in "a new architecture of trust", where independent agents run this technology, validating the transactions so that they can be recorded on the ledger. This is reinforced by the fact that distributed ledger networks make people work together in a way that otherwise would not have been possible, since they would not trust each other sufficiently.

To better understand what he means by a "new architecture", the author first outlines the various architectures of trust (which he defines as "the ways the components of a system interact with one another" (Werbach, 2018, p. 25)) that humans have developed over time. The main architectures are:

  • Peer to peer (P2P): here, trust is based on a face-to-face relationship that arises because the agents share ethical norms and mutual commitment. The downside of this architecture is that it is possible only for few people and small communities, given that knowing each other is pivotal in creating trust.
  • Leviathan: this vision starts from the belief that humans are not fully trustworthy and therefore a powerful third party, the state/government, is needed to enforce private contracts and property rights. This is achieved through the monopoly of violence held by the state: people can now trust each other because, if something goes wrong, the leviathan can punish the guilty and enforce previous commitments.
  • Intermediaries: transactions are guaranteed by a third party (different from the government), which is trusted to perform certain actions. Intermediaries create the possibility to perform transactions that in a peer-to-peer network would have been difficult: the other agent is trusted because there is an intermediary that makes the transaction happen. Examples are e-commerce platforms such as Amazon, or financial services companies.

The new architecture of trust created by the blockchain is defined as "trustless trust". Without trust it would fail, since no engagement between individuals is possible without a form of trust, but if it relied on the old trust structures it would not be a revolution and would fail its primary objective. On the blockchain network, no agent is assumed to be trustworthy, but the output of the network is. Generally speaking, in every transaction the counterpart, the intermediary and the dispute resolution mechanism must be trusted, but the blockchain substitutes these elements with code. There is no possibility to assess the other party's trustworthiness, since all agents are represented by private/public keys in the network, which allow for their anonymity; there is no central intermediary, since the platform is a distributed machine operated by all the participants; disputes are solved through pieces of code called smart contracts, which perform a certain action with no possibility to stop them. Transactions are verified through cryptographic proofs that other agents can check mathematically. Therefore, it is not possible to frame this system within the common architectures: it is not P2P since the other parties are unknown, there is no central authority and there is also no central intermediary since the platform is operated in a decentralized way. Each agent needs to trust the network and not each agent with whom he is engaging in a transaction. The blockchain (and Bitcoin) seems the perfect solution for the lack of trust in modern society and for the problems that the previous architectures of trust presented. The fact that Bitcoin was born after the Great Financial Crisis is not random: P2P relationships were not sufficient in a world so deeply interconnected, intermediaries were considered the cause of the crisis itself and the Leviathan, i.e. the government, was not able to foresee the crisis and prevent it.

Blockchain trust also relies on the immutability of the recorded information, through the mechanisms explained above. However, immutability must be understood in a probabilistic way. The more blocks are added, the more immutable the previous transactions become, because altering them would require a prohibitive amount of computing power. Each agent can decide after how long to trust the state of the ledger. Therefore, blockchain trust is not instantaneous. Moreover, the transparency of the ledger, meaning that the record of every transaction is publicly available and the software itself through which the blockchain operates is open source, is an important characteristic that increases trust. Finally, blockchain trust is algorithmic, meaning that it relies on algorithms to maintain the system: what must be trusted are not the people operating on it, but the software and the math behind the consensus process.
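This probabilistic immutability can be quantified with the gambler's-ruin argument from Nakamoto (2009): an attacker controlling a fraction q of the hash power who tries to rewrite a block buried z blocks deep succeeds with probability 1 if q ≥ p (the honest fraction) and (q/p)^z otherwise. A minimal sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever
    catches up from z blocks behind the honest chain (honest share
    p = 1 - q). Gambler's-ruin result from Nakamoto (2009).

    With a majority (q >= p) the attacker always succeeds; with a
    minority, the success probability shrinks exponentially in z.
    """
    p = 1.0 - q
    if q >= p:
        return 1.0  # the (almost impossible) 51% attack always succeeds
    return (q / p) ** z
```

This is why agents may wait for several confirmations before trusting a transaction: with q = 0.1, six confirmations already push the reversal probability to the order of one in a million.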

Satoshi's error was to believe that in his architecture trust was absent, while in reality it reduced the need to trust some parts of the system. Trust is needed and the blockchain could not function without it. Firstly, engaging in a transaction in a system without central control and with immutability means that no one is able to oversee the transaction and amend it if something is wrong. Agents can be confident that the transaction will be correctly registered, but a distributed ledger will not be able to verify whether the content is legitimate and, if something is wrong with the transaction itself, there is no possibility to reverse it: smart contracts are unstoppable. Moreover, humans are not entirely out of the system, which means that errors and misunderstandings can occur. And the cryptographic techniques are still vulnerable to attacks: they may be difficult to perform, but users that engage with a blockchain need to trust that this will not happen.

The author argues that the success of the blockchain as an architecture of trust will depend on its governance. The blockchain is a way to enforce some rules, but it is also a product of rules designed by humans, and it therefore needs a governance to continue to operate and to decide the next rules of the game. Moreover, law should regulate the blockchain framework: without legal rules, the blockchain could be used as an instrument by criminals and terrorists (for money laundering, for example), and this would reduce the trust that ordinary people put in this system. Crypto enthusiasts argue that the role of law would be replaced by smart contracts, but code cannot fully formulate human intentions, which are an important part of private contracts, and this could create misunderstandings between the parties. Law can intervene where smart contracts are not able to. Finally, regulation can also play an important role in developing the future of the blockchain and fostering its trustworthiness, as it does with other financial instruments and institutions.

8.3 Trust and the blockchain in practice

Some scholars have started to think about how trust between users can be enhanced in real blockchain applications. You et al. (2022) identify the main challenge as the fact that there is no consensus about how to measure trust in the blockchain environment. Therefore, they develop a framework to do so, creating a system based on subjective ratings of trustworthiness. The authors start by identifying six different blockchain applications, considering which factors can be used to measure trust in each specific domain. Identifying the key factors behind trustworthiness is essential for creating a system to enhance trust. In particular:

  • Supply chain: it is possible to measure how trustworthy the supplier is by the average order arrival time and the defect rate, and how trustworthy the buyer is by the number of days taken for payment.
  • Healthcare industry: to assess the trustworthiness of these firms, regulatory compliance proof, claim approval rate, drug prescription regularity can be the starting point.
  • E-commerce: to assess the trustworthiness of those firms, the accuracy of ratings provided by the users and the security of payments represent the most important features.
  • IoT devices: system security data and reliability of the data provided by these devices are the most important features.
  • Finance: pivotal factors are the security of transactions and data and the efficiency and quality of the communications.
  • Social media: news and reputation credit represent the most important charac- teristics to assess trustworthiness.

The problem of the blockchain is that, although the information recorded cannot be modified easily, the data may not always be true: the need for accountability arises because of this fact.

The system presented by the authors is based on trust scores given by agents that interact with other agents on the blockchain applications. Initially, there would be no score, since no transaction has occurred yet. Then, the two parties start to interact and begin to collect trust factors about each other. The specific factors, described above, depend on which application is under consideration. Then, each actor gives its score, which is recorded on the blockchain and becomes available to other users, who are now better informed about the other users of the blockchain application and can decide whether to interact with them or not. The validity of the scores is ensured by the fact that each user will have followed the KYC validation procedures before interacting on the application, and it will be possible to identify the particular participant from the outside through verifiable credentials. Therefore, no rating will be anonymous.

This system may increase trust between users because they are incentivized to adhere to the common organizational norms of each sector: otherwise, they would damage their reputation by having a low score permanently recorded on the blockchain. Therefore, this model may create a set of incentives that aligns the two sides of each transaction.
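The scoring scheme can be sketched as an append-only record of non-anonymous ratings (a hypothetical simplification of the system in You et al. (2022); class and method names are ours):

```python
class TrustLedger:
    """Append-only store of trust scores, mimicking the permanence
    of records on a blockchain."""

    def __init__(self):
        self.records = []  # once appended, records are never modified

    def rate(self, rater_id, ratee_id, score):
        # KYC-verified credentials mean no rating is anonymous:
        # the rater's identity is stored alongside the score.
        self.records.append({"rater": rater_id,
                             "ratee": ratee_id,
                             "score": score})

    def reputation(self, ratee_id):
        """Average score of a participant, or None before any interaction."""
        scores = [r["score"] for r in self.records if r["ratee"] == ratee_id]
        return sum(scores) / len(scores) if scores else None
```

Because low scores stay permanently on record, a participant's reputation can only be improved by future good behavior, never by deleting past ratings.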

9. Trust games in the blockchain

As explained before, the blockchain system employs game theory and incentives to make the agents act honestly on the network. After the work of Satoshi Nakamoto, several papers were developed to study the incentive structures and the games behind the blockchain and its consensus mechanism.

9.1 Trust evolution games among miners

Breiki (2022) studies how trust among players evolves over time when they play trust evolution games. To do that, he defines the features of his abstract game. Firstly, the author defines the parameters of the model: there are various miners and each of them has the possibility to cooperate (acting honestly) or defect (cheating); there is a vector of probabilities that defines the likelihood of each player succeeding in solving the puzzle, which is proportional to their computational power; there are the costs and rewards of mining, taking into account also the propagation delay (i.e. the time needed to validate a transaction); and there is the market value. Moreover, the author uses two learning algorithms: fictitious play, where prior beliefs are defined, and satisficing learning, where aspiration levels of payoff and learning rates are defined. All in all, the author finds that the players learn to cooperate in the game to get a better payoff and, for satisficing players, lower learning rates increase the final payoff.

9.2 Evolutionary games on the blockchain network

J. Zhang and Wu (2021) study evolutionary game theory applied to the blockchain network to understand the strategies and incentives of the participants and their cooperative behavior. The authors explain that the blockchain is a perfect environment for evolutionary game theory because:

  • There is information symmetry, since all individuals have and share the same information on the network and each participant has complete transaction data.
  • All the participants are equal so no party has a dominant advantage when the game begins.
  • Participants are prone to trust each other and engage in the game because of the cryptographic mechanisms, which make the environment credible and immutable.
  • The process of adding new blocks can be seen as a form of repeated games.

Agents have bounded rationality, since they cannot get global information because the network is complex, and therefore they are not fully able to maximize their payoffs. Each participant can adopt two possible behaviors, cooperation or defection, and they update their strategy considering the maximum payoff. Indeed, during the generation of new blocks, each agent is able to learn from his own actions and the actions of the winners.

The model developed comprises two groups of miners: group A, with an inclination for cooperation, and group B, inclined to cheating. Participating in the game costs $C_a$ and $C_b$ for the two groups respectively, and each game brings a revenue R, which is rewarded to the participants. Each group has different benefits (for group A, transaction fees and mining rewards; for group B, illegal revenue). Finally, there are also punitive measures, denoted P.

Each player, in both groups, can decide which strategy to adopt. Their payoffs are: \begin{align*} E_{h,a} &= y(K_a + \lambda R - C_a) + (1-y)(K_a - C_a) \\ E_{m,a} &= y(K_a - C_a - P) + (1-y)(K_a - P) \\ E_{h,b} &= x(K_b + (1-\lambda)R - C_b) + (1-x)(K_b - C_b - P) \\ E_{m,b} &= x(K_b - C_b) + (1-x)(K_b - P) \end{align*}

where $y$ and $x$ are the probabilities of winning the game, $h$ denotes the honest strategy, $m$ the dishonest strategy and $\lambda$ is the portion of R won.
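The four payoff expressions translate directly into code (a direct transcription of the equations above; reading $K_a$ and $K_b$ as the baseline payoffs of the two groups is our interpretation):

```python
def expected_payoffs(x, y, Ka, Kb, Ca, Cb, R, lam, P):
    """Transcription of the four payoff equations above.

    x, y: probabilities of winning the game; lam: share of the
    reward R; P: penalty; Ka, Kb (our reading): baseline payoffs of
    groups A and B. Returns (E_ha, E_ma, E_hb, E_mb), the honest and
    dishonest payoffs for each group.
    """
    E_ha = y * (Ka + lam * R - Ca) + (1 - y) * (Ka - Ca)
    E_ma = y * (Ka - Ca - P) + (1 - y) * (Ka - P)
    E_hb = x * (Kb + (1 - lam) * R - Cb) + (1 - x) * (Kb - Cb - P)
    E_mb = x * (Kb - Cb) + (1 - x) * (Kb - P)
    return E_ha, E_ma, E_hb, E_mb
```

A group-A miner then prefers honesty whenever E_ha > E_ma, i.e. when the expected reward share and the penalty for cheating outweigh the gain from defecting.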

The authors assume that, at the beginning, each participant starts with a given strategy. The goal is to examine the change in the populations of honest and dishonest agents after several rounds, taking into account also changes in the parameters of the game. The authors argue that, unlike in classic evolutionary games, the relationships among agents in the blockchain are random. Moreover, the size of the network matters: with small networks, the emergence of cooperative behaviors is easier. Finally, a definition of the evolutionarily stable strategy (ESS) is given. The ESS is "a strategy that other strategies cannot invade" (J. Zhang & Wu, 2021, p. 5).

The authors run various simulations. Firstly, 67% of group A are honest agents and 20% of group B are betrayers. Here, the honest strategy is an ESS, since the number of betrayers tends to 0 as the rounds increase. However, as the expected payoff of group B increases, the honest strategy is still an ESS but a weak one, because higher payoffs from the dishonest strategy tend to tempt agents to cheat. Therefore, future revenue expectations influence the behavior of participants in the blockchain. Then, the influence of the network structure is analyzed within the group A population. They use a Watts-Strogatz (WS) small-world model and a Barabasi-Albert (BA) scale-free network model. The authors find that it takes quite a lot of rounds for honest agents to establish a trusting cooperative relationship, and this may represent an opportunity for cheating agents in the blockchain. Therefore, security is a relevant topic especially in the initial stage of a blockchain.

9.3 Byzantine consensus as a dynamic game

L. Zhang and Tian (2023) develop a Byzantine consensus protocol (i.e. a consensus protocol in which some agents are faulty or malicious) on the blockchain, modeling it as a dynamic game. Their main contribution to the literature lies in the fact that the agents in their model have bounded rationality and can learn from historical observations. In particular, participants choose from a limited set of strategies (honest or dishonest); they learn from historical observations and choose their strategy accordingly, also taking into account the current state of information, but they cannot forecast the future. Moreover, they are allowed to hold inconsistent subjective beliefs about the probability of meeting agents with the same strategy: each agent believes that, for a portion m of rounds, he will meet a proposer with the same strategy, ranging from m=1 (he and the proposer always play the same strategy in every round) to m=0 (they play the same strategy only by chance). Their model, based on a BFT (Byzantine Fault Tolerance) consensus protocol, has the following features:

  • Before the game, the agents are selected to form different parallel committees, which compete in an n-round mining game and which do not change until the end of the game.
  • Each agent has one vote in the mining game.
  • In each round, an agent is randomly selected to make a proposal about the validity of a block. The other agents become validators and vote on whether the proposed block is valid. The block is validated if the number of votes exceeds a majority threshold v.
  • There is a reward R for validating the block, a cost $c_{check}$ for verifying a transaction, a cost $c_{sent}$ for voting on a transaction, and a penalty k that validators incur if they misbehave.

Before each round, each validator checks whether its congeners (the other nodes in the network) have pivotality, i.e. the ability to control the consensus outcome because they hold the majority. As specified before, participants have two strategies: the honest strategy, where miners uphold the consensus protocol, and the Byzantine (dishonest) strategy, where miners damage it. The authors assume that the set of participants is fixed and no one quits. Initially, a number $x_1$ of miners choose the honest strategy in the first round.
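One voting round of the protocol described above can be sketched as follows. The parameter values and the payoff bookkeeping are illustrative assumptions, not the authors' exact specification; in particular, the behavior "honest validators vote for a valid proposal, Byzantine ones against" is a simplification.

```python
import random

def mining_round(strategies, v, R=4.0, c_check=0.5, c_sent=0.1, k=2.0, seed=None):
    """One round: a random proposer, the rest vote; the block is accepted
    if the vote count exceeds the majority threshold v.
    strategies: dict node -> 'honest' or 'byzantine'.
    Returns (accepted, payoffs). Illustrative bookkeeping, not the paper's."""
    rng = random.Random(seed)
    nodes = list(strategies)
    proposer = rng.choice(nodes)
    validators = [n for n in nodes if n != proposer]
    # Honest validators verify and vote for a valid block; Byzantine ones do not.
    votes = sum(1 for n in validators if strategies[n] == "honest")
    accepted = votes > v
    payoffs = {}
    for n in validators:
        cost = c_check + c_sent            # verify the transaction, send a vote
        if accepted:
            # Honest voters collect the reward R; misbehaving voters pay penalty k.
            payoffs[n] = (R if strategies[n] == "honest" else -k) - cost
        else:
            payoffs[n] = -cost             # no block, only the round's costs
    return accepted, payoffs
```

For example, with four honest nodes and v=1, the block is accepted and each validator nets R − c_check − c_sent.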

The authors define a stable equilibrium as the situation in which $x_t=x_{t-1}$, i.e. when the portion of miners playing the honest strategy remains constant (the number of agents switching from honest to dishonest equals the number switching from dishonest to honest). There are three possible stable equilibria:

  • The honest stable equilibrium, where no agent is cheating, so $x_t=x_{t-1}=1$.
  • The Byzantine stable equilibrium, where all agents are cheating, so $x_t=x_{t-1}=0$.
  • The pooling stable equilibrium, where both strategies coexist, so $x_t=x_{t-1} \in (0,1)$.

Which equilibrium is reached depends on the initial number of cheaters and honest miners, their belief m, the cost-reward mechanism, and the pivotality rate (i.e. the minimum percentage of nodes that must agree to reach consensus and add a block). They find that only the honest stable equilibrium can support the safety, the liveness (all non-faulty agents eventually produce output), and the validity (all participants have the same valid output) of the blockchain. Moreover, if the reward-punishment ratio increases, the blockchain becomes safer and the honest stable equilibrium is easier to achieve, while if the cost-punishment ratio increases, the safety and the liveness of the ledger are threatened and the honest equilibrium is harder to achieve. Finally, if the pivotality rate increases, every stable equilibrium becomes harder to achieve.
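Under the definition above, once the honest share stops changing the equilibrium type can be read off directly from $x_t$. A tiny helper, with the update dynamics themselves left abstract:

```python
def classify_equilibrium(x_t, x_prev, tol=1e-9):
    """Classify a candidate stable point x_t = x_{t-1} into the three cases above."""
    if abs(x_t - x_prev) > tol:
        return "not stable"          # the honest share is still moving
    if abs(x_t - 1.0) <= tol:
        return "honest"              # x = 1: no agent is cheating
    if abs(x_t) <= tol:
        return "byzantine"           # x = 0: all agents are cheating
    return "pooling"                 # x in (0, 1): both strategies coexist
```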

10. Trust in algorithms

Algorithms are becoming increasingly important in everyday life, from health care to criminal justice, contributing to decision-making processes in many fields. The natural question of whether algorithms can be trusted therefore arises.

According to Spiegelhalter (2020), an algorithm's trustworthiness comes from the claims made about the system (how its developers say the system works and what it can do) and the claims made by the system (the algorithm's responses and outputs about a specific case). He therefore proposes a model to assess and boost trustworthiness in algorithms. Regarding the first kind of claims, developers should clearly state the benefits and downsides of using their algorithms. To assess these, the author proposes an evaluation structure of four phases:

  • Digital testing: the algorithm's accuracy should be tested on digital datasets.
  • Laboratory testing: the algorithm's results should be compared with those of human experts in the field. An independent committee should evaluate which response is better.
  • Field testing: the system should be tested in the field, to determine whether it does more harm than good, considering also the effects it can have on the overall population.
  • Routine use: if the algorithm passes the three previous phases, it should be monitored continuously, so that any problems that arise can be solved.
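The four phases above form a gated pipeline: an algorithm only advances when the previous phase has passed. A minimal sketch, with all names and the check interface being illustrative assumptions rather than anything Spiegelhalter specifies:

```python
# A gated evaluation pipeline for the four phases above (names are illustrative).
PHASES = ["digital testing", "laboratory testing", "field testing", "routine use"]

def evaluate(algorithm, checks):
    """Run phase checks in order; the algorithm only advances if the previous
    phase passed. `checks` maps phase -> callable(algorithm) -> bool.
    Returns (phases passed, first failing phase or None)."""
    passed = []
    for phase in PHASES:
        if not checks[phase](algorithm):
            return passed, phase        # stop at the first failing phase
        passed.append(phase)
    return passed, None                 # all phases passed: fit for routine use
```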

Having explicit positive evidence in all these phases would boost the trustworthiness of the claims made about the system by its developers. As for the second type of claims, to reach a higher degree of trustworthiness the algorithm must specify the chain of reasoning behind its claims, the most important factors that led to its output, and the uncertainty around the claim. Moreover, a counterfactual analysis should be performed (i.e. what the output would be if the input changed). Overall, algorithms should be made clearer and more explainable, and transparency can play an important role here. To increase trustworthiness, an algorithm should be accessible and intelligible to people, usable (i.e. have effective utility), and assessable, so that the process behind every claim is available. Ultimately, it should show how it works. More importantly, it should also clearly state its own limitations, so that trustworthiness does not become blind trust.
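The counterfactual analysis mentioned above can be sketched generically: perturb each input feature and report which changes flip the output. The toy credit-scoring rule and all names below are hypothetical, purely for illustration:

```python
def counterfactuals(model, inputs, perturb):
    """For each feature, apply a perturbation and report whether the model's
    output flips -- a minimal sketch of counterfactual reporting.
    `model` is any callable dict -> label; `perturb` maps feature -> new value."""
    baseline = model(inputs)
    flips = {}
    for feature, new_value in perturb.items():
        altered = dict(inputs, **{feature: new_value})
        flips[feature] = (model(altered) != baseline)
    return baseline, flips

# Hypothetical example: a toy credit-scoring rule.
approve = lambda d: "approve" if d["income"] > 2 * d["debt"] else "reject"
base, flips = counterfactuals(approve,
                              {"income": 50, "debt": 30},
                              {"debt": 20, "income": 80})
# base == "reject"; lowering debt to 20 or raising income to 80 flips the decision.
```

Reports of this kind ("the loan would have been approved had the debt been 20") are one concrete way an algorithm can expose the factors behind its output.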

11. Conclusion

Trust plays a pivotal role in ensuring the existence and development of modern societies. This paper provides a comprehensive summary of the current literature on how trust relates to the world of economics and finance. To begin with, the concept of trust is clearly defined and differentiated from other human sentiments such as cooperation and confidence. The notion of risk is also discussed, as a complete assessment of the other party's actions is antithetical to a trust relationship. Furthermore, various methods for measuring trust are explored, with trust games being the most commonly used tool by researchers. The paper goes on to explain how trust is a source of comparative advantage, which determines trade patterns. Additionally, trust is linked to stock market participation: more trusting individuals are more likely to invest in risky assets and, conditional on participating, they allocate a larger portion of their wealth. While complete trust among individuals would potentially be beneficial, it is not possible in the real world; however, money can act as a substitute that replicates the allocations of a trustworthy economy. The paper also emphasizes the importance of trusting institutions, especially in a world where trust is lacking. Effective communication among individuals is critical in assessing the true trustworthiness of an institution. The paper then delves into the relationship between trust and blockchain technology, which can be seen as a new architecture of trust. Moreover, various authors have developed blockchain trust games to better understand the best consensus mechanism. Finally, the importance of trusting algorithms is highlighted, given their widespread use in everyday technology, healthcare, and the justice system.

References

  • Alós-Ferrer, C., & Farolfi, F. (2019). Trust games and beyond. Frontiers in Neuroscience, 13.
  • Bajos, N., Spire, A., Silberzan, L., Sireyjol, A., Jusot, F., Meyer, L., Franck, J.-E., Warszawski, J., Bajos, N., Warszawski, J., Bagein, G., Counil, E., Jusot, F., Lydie, N., Martin, C., Meyer, L., Raynaud, P., Rouquette, A., . . . Spire, A. (2022). When lack of trust in the government and in scientists reinforces social inequalities in vaccination against COVID-19. Frontiers in Public Health, 10.
  • Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142.
  • Bohnet, I., & Zeckhauser, R. (2004). Trust, risk and betrayal [Trust and Trustworthiness]. Journal of Economic Behavior & Organization, 55(4), 467–484.
  • Breiki, H. (2022). Trust evolution game in blockchain. 2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA), 1–4.

  • Burnham, T., McCabe, K., & Smith, V. L. (2000). Friend-or-foe intentionality priming in an extensive form trust game. Journal of Economic Behavior & Organization, 43(1), 57–73.

  • Cingano, F., & Pinotti, P. (2016). Trust, firm organization, and the pattern of comparative advantage. Journal of International Economics, 100, 1–13.
  • Edelman. (2022). Edelman trust barometer 2022 [Accessed on April 22, 2023]. https://www.edelman.com/sites/g/files/aatuss191/files/2022-01/2022%20Edelman%20Trust%20Barometer%20FINAL_Jan25.pdf

  • Fehr, E., Fischbacher, U., Rosenbladt, B., Schupp, J., & Wagner, G. (2002). A nationwide laboratory: Examining trust and trustworthiness by integrating behavioral experiments into representative surveys. Schmoller's Jahrbuch, 122.
  • Gale, D. (1978). The core of a monetary economy without trust. Journal of Economic Theory, 19(2), 456–491.
  • Gambetta, D. (2000). Can we trust trust? Trust: Making and Breaking Cooperative Relations, electronic edition, Department of Sociology, University of Oxford, 213–237.

  • Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. The Quarterly Journal of Economics, 115(3), 811–846.

  • Grimes, A. (1990). Bargaining, trust and the role of money. The Scandinavian Journal of Economics, 92(4), 605–612.
  • Guiso, L., Sapienza, P., & Zingales, L. (2008). Trusting the stock market. Journal of Finance, 63, 2557–2600.
  • Houser, D., Schunk, D., & Winter, J. (2010). Distinguishing trust from risk: An anatomy of the investment game. Journal of Economic Behavior & Organization, 74(1), 72–81.

  • Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases trust in humans. Nature, 435(7042), 673–676.

  • Lenton, P., & Mosley, P. (2011). Incentivising trust. Journal of Economic Psychology, 32(5), 890–897.
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734.
  • Meylahn, B. V., den Boer, A. V., & Mandjes, M. (2023). Trusting: Alone and together.
  • Nakamoto, S. (2009). Bitcoin: A peer-to-peer electronic cash system. Cryptography Mailing list at https://metzdowd.com.
  • Spiegelhalter, D. (2020). Should we trust algorithms? Harvard Data Science Review, 2(1).
  • Warren, M. (2018). Trust and Democracy. In The Oxford Handbook of Social and Political Trust. Oxford University Press.
  • Werbach, K. (2018). The Blockchain and the New Architecture of Trust. The MIT Press.
  • You, S., Radivojevic, K., Nabrzyski, J., & Brenner, P. (2022). Trust in the context of blockchain applications. 2022 Fourth International Conference on Blockchain Computing and Applications (BCCA), 111–118.
  • Zak, P. J., Kurzban, R., & Matzner, W. T. (2005). Oxytocin is associated with human trustworthiness. Hormones and Behavior, 48(5), 522–527.
  • Zhang, J., & Wu, M. (2021). Cooperation mechanism in blockchain by evolutionary game theory. Complexity, 2021.
  • Zhang, L., & Tian, X. (2023). On blockchain we cooperate: An evolutionary game perspective.