
If you invert the colours in the chart, you also get the consequences of their failure modes. In designing tokenization schemes some years ago, we used a trust model like this, and the basic problem reduces to the adage, "a security system is only as strong as its recovery process."

In the field of trust models Vitalik has illustrated, trust comes down to a few questions: do you have a way to tumble your root of trust, do you leave it static (like an HSM with destroyed keys), or do you federate it? The answer is, of course, "it depends..." In balancing consensus, proofs, and anonymity (e.g. ZK), the application defines the needs.

The issue, I think, is that these problems are all negatively defined: they presume a threat model before a use case, and they are artifacts of that threat model, in that they wouldn't exist if they weren't a reaction to it. These things (blockchains and their applications) are essentially criticisms that allow people to organize and defect to a certain extent, but they lack a quality of essentialness I can't seem to find a name for. It's as if they aren't discovered things, just artifacts of a constraint. Such fun to read his stuff.



>If you invert the colours in the chart, you also get the consequences of their failure modes.

Forgive me if I am misunderstanding what you are saying, but it sounds to me like you are suggesting that the consequences of a 1-of-N failure are inherently worse than those of an N-of-N failure, which is not a fundamental truth in any way. It is entirely possible for a 1-of-N system to have better recovery modes than an N-of-N system; in fact, it's often much easier to tumble / enhance / improve the root of trust in a 1-of-N system than in an N/2-of-N system (for example, rolling trusted-setup ceremonies vs. multi-party computation).


The point I was making is that in the 1/N system, the failure of that one person or element brings the whole thing down: a single catastrophic failure mode. At the other extreme, N/N, the failure is contained to that group. The partial schemes in between mean that a greater number of people/parts have to fail to bring the whole system down. (The example case is storing a single shared secret, which, if compromised, means re-enrolling all N people in that secret again.)

You can replace a root of trust, but then you have to re-enroll all parties into it. When you have partial or federated trust, the failure is contained to the N/x group.

I'm thinking there may be a universal trade-off between the number of trusted parts and the consequence/cost of failure.

The concept is that a trust model is essentially a non-reversible function you iterate from its unique initial "trusted" conditions (like derived keys, or a certificate chain). If those unique initial conditions are replicated, you have to re-compute the entire function from a new unique initial condition, or risk "fake" branches.

A root of trust is the root node of a tree (or a DAG these days), and a compromise of any root effectively isolates its downstream branches. Compromising the single root means you compromise the whole tree, whereas in a federated, multi-rooted tree the damage can be contained - in the model I'm thinking of.
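To make that containment claim concrete, here's a toy sketch (my own model, not from the article): represent each root of trust as anchoring a set of downstream leaves, and count how many leaves a single root compromise takes out in a single-rooted scheme versus a federated one. The names (`compromised_leaves`, `single`, `federated`) are illustrative.

```python
# Toy model (an assumption for illustration): a trust "forest" is a dict
# mapping each root of trust to the set of leaves it anchors. Compromising
# a root compromises everything downstream of it.

def compromised_leaves(forest, broken_root):
    """Return the leaves affected when one root of trust is compromised."""
    return forest.get(broken_root, set())

# Single-rooted scheme: one root anchors all 9 leaves.
single = {"root": {f"leaf{i}" for i in range(9)}}

# Federated scheme: three roots, three leaves each.
federated = {f"root{r}": {f"leaf{3*r + i}" for i in range(3)} for r in range(3)}

print(len(compromised_leaves(single, "root")))      # 9: the whole tree falls
print(len(compromised_leaves(federated, "root0")))  # 3: damage contained to one group
```

The graph structure alone determines the blast radius here, which is the point: the other federated subtrees never reference the broken root, so they are untouched.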

TL;DR: recovery in the 1-of-N case requires re-instantiating or re-enrolling all of N, which basically means bootstrapping the whole scheme from scratch. Fine for a closed system, hard for an open one.


I think you misunderstand what a 1-of-N system means here. It means that out of N participants, any one of them is sufficient to keep the network safe.

In any system, if N out of N participants are down, you are going to have a hard time resetting. Typically it is easiest to recover from this failure in a 1-of-N system, because you only need one person to recover, and it doesn't matter which person.

I'm oversimplifying a bit, but in most cases a 1-of-N system is strictly more robust than an N/2-of-N system (or any system that requires more than one honest participant).
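A quick back-of-the-envelope sketch of that robustness claim, under the simplifying assumption that each participant is independently compromised with probability p: a 1-of-N system fails only if all N participants are compromised, while an N/2-of-N system fails once more than half are. The function names are just illustrative.

```python
from math import comb

def p_fail_1_of_n(p, n):
    # 1-of-N: safe while at least one participant is honest,
    # so the system fails only if all N are compromised at once.
    return p ** n

def p_fail_majority(p, n):
    # N/2-of-N: fails once more than half the participants are compromised
    # (binomial tail, assuming independent compromise with probability p).
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

p, n = 0.1, 10
print(p_fail_1_of_n(p, n))    # ~1e-10: needs every participant to fall
print(p_fail_majority(p, n))  # ~1.5e-4: losing a majority is far likelier
```

With p = 0.1 and n = 10 the 1-of-N failure probability is about six orders of magnitude smaller, which is the "strictly more robust" intuition in numbers, given the independence assumption.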


I think I do get your point, but my emphasis is that when you replace "person" with "key" or "secret," it explains the consequences of the key-distribution problem.

The system that requires only one honest participant seems stable, except that that one person has a small but non-zero likelihood of compromise. For a system that requires N/2 honest people, each of whom has a probability of compromise, it depends on whether those probabilities are independent or dependent. In the dependent case you are correct: the more people you need to trust, the higher the likelihood of one bringing the whole thing down. But in the independent (e.g. federated) case, the consequences are contained.
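One way to make the dependence point concrete (a toy model of my own, not from the thread): add a common-cause event, such as a shared root secret leaking, that compromises everyone at once with probability q, on top of the independent per-participant compromise probability p. The function names are illustrative.

```python
from math import comb

def p_fail_majority_indep(p, n):
    # Independent case: majority-of-N fails once more than n/2 participants
    # are independently compromised (binomial tail).
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def p_fail_majority_common_cause(p, q, n):
    # Dependent case: with probability q a shared secret leaks and everyone
    # is compromised together; otherwise compromises are independent.
    return q + (1 - q) * p_fail_majority_indep(p, n)

p, q, n = 0.1, 0.01, 10
print(p_fail_majority_indep(p, n))            # ~1.5e-4
print(p_fail_majority_common_cause(p, q, n))  # ~1.0e-2: the shared cause dominates
```

Even a 1% chance of a correlated failure swamps the independent-failure term, which is why whether compromise probabilities are independent or dependent changes the conclusion.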

I'm saying that if you interpret the diagram from the perspective of the consequences of failure, turning the 1/N section red instead of green indicates the level of catastrophe. We could be running up against the limits of the chart's heuristic analogy, but viewing trust as flowing downstream over time from the root node of a tree, and asking how many roots of trust you need for a robust system, shows that the fewer the roots of trust, the greater the impact of a compromise.

I'd considered whether replacing those node keys with multipart keys would change things, but the whole concept of a compromise is an exogenous phenomenon: what this trust tree/graph is made of doesn't change the effect of some super force finding a way to compromise a node. If a trust model is just a graph, then the properties of that graph determine the qualities of the model, not what the nodes and relationships themselves are made of.

That final point would be my big-leap conjecture.



