Hacker News
Trust Models (vitalik.ca)
150 points by feross on Aug 21, 2020 | 30 comments


Whenever I read Vitalik's work, I find myself convinced that all of society and its various problems can be boiled down to incentives and their alignment or misalignment.


“Never, ever, think about something else when you should be thinking about the power of incentives.” — Charlie Munger, https://fs.blog/2017/10/bias-incentives-reinforcement/

There is a subfield of game theory and economics called mechanism design that tackles this directly.

Instead of taking a game as given and figuring out how it works and what the outcome is when agents play it, the problem is to design a game that creates desired outcomes when selfish agents play it.

It's possible to design mechanisms where the utilitarian social-choice function is the best choice even when players are selfish. The most famous result in the field is the Vickrey–Clarke–Groves mechanism, which achieves a socially optimal solution in auctions. Quadratic voting and quadratic funding are other interesting mechanisms that produce good outcomes; Vitalik Buterin is involved with this work too.
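As a concrete illustration of the quadratic funding idea, here is a minimal sketch of the matching rule (my own toy example, ignoring the budget-limited scaling real implementations apply): a project's ideal funding is the square of the sum of the square roots of its contributions, so many small donors attract far more matching than one large donor giving the same total.

    import math

    def quadratic_funding(contributions):
        """Return (ideal total funding, matching amount) for one project:
        total = (sum of sqrt of each individual contribution) ** 2."""
        total = sum(math.sqrt(c) for c in contributions) ** 2
        return total, total - sum(contributions)

    # 100 people giving $1 each vs. one person giving $100:
    print(quadratic_funding([1] * 100))  # (10000.0, 9900.0) - broad support is matched heavily
    print(quadratic_funding([100]))      # (100.0, 0.0)      - a single donor gets no match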


> design a game that creates desired outcomes when selfish agents play it.

Curious whether these ideas have been applied to constructing healthy, effective companies with happy employees?

Or maybe one could say that everyone who comes up with a KPI is doing this?


You may enjoy this podcast episode with him and Eric Weinstein, where he dives deeply into these topics: https://www.youtube.com/watch?v=8TwNNgiNZ7Y


Not related to this topic, but I recently watched Lex Fridman's podcast episode with him and it was really interesting as well: https://www.youtube.com/watch?v=3x1b_S6Qp2Q.


Yes, also a good one. I'd recommend pretty much all of Lex Fridman's and Eric Weinstein's podcast episodes in general. Lots of insightful conversations with (in my opinion) some of the most interesting people in the world.

Links for anyone curious:

Eric Weinstein's The Portal Podcast - https://www.youtube.com/playlist?list=PLq9jO8fmlPee9ezOraOHA...

Lex Fridman's Artificial Intelligence Podcast - https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuK...


These are all great links, thank you!


That's the basic premise that drives economic theory.


To me that's the basic point of game theory.


Any recommendations of other work of his to read?


I'm not OP, and the books that jumped to my mind when I read this are not blockchain-related.

They are related by questions of society, trust, and incentive design.

They seem largely aligned with Vitalik's `0 of N`/`1 of N` framing.

Before clicking through to Goodreads, though, check out this book review of Seeing like a State by Scott Alexander:

https://slatestarcodex.com/2017/03/16/book-review-seeing-lik...

Next, check out both of these books:

- James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed [0]

- Michael Huemer, The Problem of Political Authority: An Examination of the Right to Coerce and the Duty to Obey [1]

Neither book will disappoint!

[0]: https://www.goodreads.com/book/show/20186.Seeing_Like_a_Stat...

[1]: https://www.goodreads.com/book/show/15794037-the-problem-of-...

Edit: for clarity.


I heartily recommend Seeing Like a State to anyone interested in the development of the modern state, some of the largest missteps made by a slew of modernist authoritarian regimes (such as the Soviet Union), and an introduction to some very legitimate criticisms of modernism that postmodernism makes.

However, I want to caution anyone against thinking that this book says anything about game theory. I'm not sure why it's being recommended here; it doesn't have much to say about the current topic, except in that it often points out that authoritarian governments are free to ignore any feedback, feedback of the kind that incentive schemes are usually designed to generate.


You pointed out exactly why I mentioned it:

> except in that it often points out that authoritarian governments are free to ignore any feedback, feedback of the kind that incentive schemes are usually designed to generate.

A view of the modern state is made more accurate if one sees the state as an entity eager to ignore feedback from its citizens, rather than as one actively responsive to citizen opinion.

Robert Caro has written amazing books. I'm half-way through this two-hour dialog titled "On Power"[0].

He says he essentially stumbled into writing The Power Broker: Robert Moses and the Fall of New York[1] after observing how Robert Moses (an unelected bureaucrat) completely controlled huge "democratic" institutions.

I mention the books because they highlight some of the means and ways in which the state encourages its people to think it is responsive to them, but this responsiveness is a sleight of hand, even as plenty of participants in the system (both citizens and representatives of the state) earnestly believe that the state is responsive to public input.

[0]: https://www.audible.com/pd/On-Power-Audiobook/B06XNKVH16

[1]: https://www.goodreads.com/book/show/1111.The_Power_Broker


If you invert the colours in the chart, you also get the consequences of their failure modes. In designing tokenization schemes some years ago, we used a trust model like this, and the basic problem reduces to the adage, "a security system is only as strong as its recovery process."

In the space of trust models Vitalik has illustrated, trust comes down to a few questions: do you have a way to tumble your root of trust, do you leave it static (like an HSM with destroyed keys), or do you federate? The answer is, of course, "it depends..." When balancing consensus, proofs, and anonymity (e.g. ZK), the application defines the needs.

The issue I think is that these problems are all negatively defined and presume a threat model before a use case. They are artifacts of that threat model, in that they wouldn't exist if they weren't a reaction to it. These things (blockchains and their applications) are essentially criticisms that allow people to organize and defect to a certain extent, but they are lacking a quality of essentialness I can't seem to find a name for. It's like they aren't discovered things, but just artifacts of a constraint. Such fun to read his stuff.


>If you invert the colours in the chart, you also get the consequences of their failure modes.

Forgive me if I am misunderstanding what you are saying, but it sounds to me like you are suggesting that the consequences of a 1-of-N failure are inherently worse than the consequences of an N-of-N failure, which is not a fundamental truth in any way. It is entirely possible for a 1-of-N system to have better recovery modes than an N-of-N system; in fact, it's often much easier to tumble / enhance / improve the root of trust in a 1-of-N system than it is in an N/2-of-N system (for example, rolling trusted-setup ceremonies vs. multi-party computation).


The point I was making is that in the 1-of-N system, the failure of the one person or element brings the whole thing down; it's a single catastrophic failure mode, whereas with N-of-N at the other extreme, the failure is contained to that group. The partial ones mean that a greater number of people/parts have to fail to bring the whole system down. (The example case is storing a single shared secret, which, if compromised, means re-enrolling all N people in that secret again.)

You can replace a root of trust, but then you have to re-enroll all parties into it. When you have partial or federated trust, the failure is contained to the N/x group.

I'm thinking there may be a universal trade-off between the number of trusted parts and the consequence/cost of failure.

The concept being that a trust model is essentially a non-reversible function you iterate from its unique initial "trusted" conditions (like derived keys, or a certificate chain), and if those unique initial conditions are replicated, you have to re-compute the entire function from a new unique initial condition, or risk "fake" branches.

A root of trust is the root node of a tree (or a DAG these days), and a compromise of any of the roots will effectively isolate its downstream branches. Compromising the single root in a 1-of-N scheme means you compromise the whole tree, whereas a federated, multi-rooted tree means the damage can be contained - in the model I'm thinking of.
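To make the tree picture concrete, here's a toy sketch (my own illustration): trust flows from roots down a tree, compromising a node taints everything downstream of it, and so the fewer the roots, the larger the blast radius of a single compromise.

    # Toy model: trust flows from roots down a tree; compromising a node
    # taints every node whose trust derives from it.
    children = {
        "rootA": ["a1", "a2"], "a1": ["a3"], "a2": [], "a3": [],
        "rootB": ["b1"], "b1": [],
    }

    def tainted(node):
        """All nodes downstream of `node`, including itself."""
        out = {node}
        for child in children.get(node, []):
            out |= tainted(child)
        return out

    # Single-rooted scheme: everything hangs off rootA, so its compromise taints all of it.
    print(tainted("rootA"))                     # {'rootA', 'a1', 'a2', 'a3'}
    # Federated scheme: rootB's subtree is untouched by a compromise of rootA.
    print(tainted("rootA") & tainted("rootB"))  # set() - the damage is contained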

TL;DR: the recovery mode in the 1-of-N case requires re-instantiating or re-enrolling all of N, which basically means bootstrapping the whole scheme. Fine for a closed system, hard for an open one.


I think you misunderstand what a 1-of-N system means here. It means that out of N participants, any one of them is sufficient to keep the network safe.

In any system, if N out of N participants are down, you are going to have a hard time resetting. Typically it is easiest to reset from this failure in a 1 of N system because you only need one person to recover, and it doesn't matter which person.

I'm oversimplifying a bit, but in most cases a 1-of-N system is strictly more robust than an N/2-of-N system (or any system that requires more than one honest participant).
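A rough back-of-envelope supporting that claim (my own sketch, assuming each of the N participants is independently compromised with probability p; the reply below discusses whether that independence assumption holds): a 1-of-N system fails only if all N are compromised, while an N/2-of-N system fails as soon as a majority is.

    from math import comb

    def fail_1_of_n(p, n):
        """1-of-N (any one honest participant suffices): fails only if all N fall."""
        return p ** n

    def fail_majority_of_n(p, n):
        """N/2-of-N: fails once more than half the participants are compromised."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    p, n = 0.1, 10
    print(fail_1_of_n(p, n))         # 1e-10
    print(fail_majority_of_n(p, n))  # ~1.5e-4 - orders of magnitude more likely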


I think I do get your point; however, my emphasis is that when you replace "person" with "key" or "secret," it explains the consequences of the key-distribution problem.

The system that requires only one honest participant seems stable, except that the one person has a small but non-zero likelihood of compromise. For a system that requires N/2 honest people - each of whom has some probability of compromise - it depends on whether those probabilities are independent or dependent. In the dependent case, you are correct: the more people you need to trust, the higher the likelihood of one bringing the whole thing down; but in the independent (e.g. federated) case, the consequences are contained.

I'm saying that the diagram, interpreted from the perspective of the consequences of failure, shows that if you turn the 1-of-N section red instead of green, it indicates the level of catastrophe. We could be running up against the limits of the chart's heuristic analogy, but viewing trust as flowing downstream over time from the root node of a tree, and asking how many root nodes of trust you need for a robust system, shows that the fewer the roots of trust, the greater the impact of a compromise.

I'd considered whether replacing those node keys with multipart keys would change things, but the whole concept of a compromise is an exogenous phenomenon; that is, what this trust tree/graph is made of doesn't change the effect of some super force finding a way to compromise a node. If a trust model is just a graph, then it will be the properties of that graph that determine the qualities of the model - and not what the nodes and relationships themselves are made of.

That final point would be my big-leap conjecture.


It seems odd to not discuss the more fundamental issues of trusting the infrastructure. All this virtual reality is on top of a physical reality controlled by governments and powerful interests. What happens when said powers declare cryptocurrencies illegal (app store removals, RST packets, etc.), or try to take them over with brute force?


I want to defend Vitalik here and say it's unreasonable to expect him to address everything in a single blog post; there are a lot more failure modes than just malicious networks, none of which are explicitly mentioned. [1]

Each of these kinds of trust fits relatively neatly into the framework he proposes, though. Blockchains are designed to run on tens of thousands of nodes distributed around the world, which naturally insulates them from any one country deciding they don't want cryptocurrencies to be used within their borders. And even if every other country decided to ban Bitcoin, if America still allowed it then Bitcoin the protocol would happily continue to work within America's borders; in that sense it has 1-of-N trust in countries allowing it.

On the other hand, if America or China decided they did not want Bitcoin to exist any more, and they were truly willing to do whatever it takes to shut it down, they have enough tools at their disposal to shut it down. This situation doesn't quite fit into V's framework, I think because some of the N are more important than others?

[1]:

- Do you trust whoever writes your client / mining software to write code without showstopping bugs?

- Do you trust them to use a reliable build process?

- Which websites do you trust to tell you what their public keys are, so you can check the signatures of the binaries you run?

- Do you trust whoever sold you the computer you're running your client / mining software on?

- Do you trust the mining rig manufacturers?

- If your protocol relies on a notion of time, do you trust the time servers?

- Do you trust mining pool operators not to use their hash power maliciously?

- Do you trust that if someone found a way to forge signatures, the rest of the world would know?


This tends to be a focus of the Bitcoin community a lot more than of other cryptocurrency communities. Bitcoin has technologies such as ASN-based sybil-attack protections [1], satellite broadcasts that cover most of the land area of the earth [2], and setups that allow Bitcoin to be broadcast over ham radio [3].

Of course that's not to say the problem is ignored by other communities. Many people are well aware of the full set of dependencies of these crypto projects and the ways that external forces might be disruptive. And many people are working on increasingly sophisticated ways to eliminate these dependencies or ensure viable alternatives if worst comes to worst.

[1]: https://github.com/bitcoin/bitcoin/issues/16599

[2]: https://blockstream.com/satellite/#satellite_network-coverag...

[3]: https://www.wired.com/story/cypherpunks-bitcoin-ham-radio/


These are all cool technologies which raise the bar an attacker must meet in order to disrupt Bitcoin, but it's worth noting that, at the very least, America and China could absolutely overwhelm those defenses.

The security of Nakamoto consensus (Bitcoin and co) relies on a level of broadcast which is pretty incompatible with a world where the network is hostile and looking to disrupt your traffic.


> relies on a level of broadcast which is pretty incompatible with a world where the network is hostile and looking to disrupt your traffic.

It would be prohibitively expensive and difficult to disrupt point-to-point radio (e.g. ham) and satellite communications enough to cripple Bitcoin.


The satellite doesn't help very much. It doesn't help at all if you want to send a transaction or mine your own blocks. It's also very easy to know whether it's broadcasting, so if the mechanism by which Bitcoin is banned is legal, it's easy to find the owner and prosecute them for breaking the law.

I'm not incredibly familiar with ham, but if you ever broadcast then you're inviting the authorities to find you and shut you down. If you strictly communicate point-to-point, it doesn't seem likely that you'd be able to tell the entire world about new blocks within a reasonable amount of time. If you keep the 10-minute block interval, then delays longer than, say, 1 minute would be extremely problematic. I'm not sure how a ham radio network could reliably tell the world about new blocks within 60 seconds without alerting the authorities.
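To put a rough number on why a one-minute delay is problematic (my own back-of-envelope, modelling block discovery as a Poisson process with the usual 10-minute average): the chance that some other miner finds a competing block during a propagation delay of d seconds is about 1 - exp(-d/600), so 60-second propagation already turns nearly a tenth of blocks into stale/orphaned work.

    import math

    def stale_block_probability(delay_seconds, block_interval=600.0):
        """Chance a competing block is found during a propagation delay,
        assuming Poisson block arrivals with a 10-minute average interval."""
        return 1 - math.exp(-delay_seconds / block_interval)

    for d in (5, 60, 300):
        print(f"{d:>3}s delay -> ~{stale_block_probability(d):.1%} chance of a competing block")
    #   5s delay -> ~0.8%
    #  60s delay -> ~9.5%
    # 300s delay -> ~39.3%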


Good points.

Oh and FYI the idiom is "worse comes to worst" (analogous to "push comes to shove" -- an intensification past an activation threshold or tipping-point)


Assuming N isn't 1, they should be able to survive a ban by a single government. This can be extended to multiple governments.


> It seems odd to not discuss the more fundamental issues of trusting the infrastructure.

It would be odd if the blockchain / cryptocurrency market came crashing down. Like most nutritionists, who do not understand the popularity of sugary drinks or fast-food chains, news.yc, I think, fails to understand the cryptocurrency / blockchain market.


I am sure that nutritionists _understand_ the popularity of sugary drinks, as in, they know what human motivations are, and they know why people prefer them to better foods.

But nutritionists also _understand_ that sugary drinks are bad in the long run, and that is why they keep saying, over and over, that one should drink less of them. If at least a few people get convinced, stop drinking sugary drinks, and avoid diabetes and heart disease, then the nutritionists have succeeded.

The same logic applies to blockchain/crypto-currency warnings that HN commenters produce.


Some of them are censorship resistant


His views are just simplistic, too simplistic. Additionally, he delves into topics that have already been examined in a more abstract way.



