> It’s physically impossible to write down all the digits of 2^^6 — a liability of living in such a small universe.
What a coincidence! Just today I learned that someone found a 49 bit program whose output far exceeds Graham's number [1], and the number 2^^6 features very prominently in it.
So given a start state and a library of valid transitions, it's easy to create a (start, path, destination) tuple that's valid (if you don't care what the destination is). It's easy to check if a (start, path, destination) tuple is valid. It's just really hard to find the path given only the start and the destination.
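Roughly, in code (a minimal sketch; the vector encoding and names are my own, assuming the "library" is a set of integer transition vectors as in a vector addition system):

```python
# Minimal sketch (hypothetical names): verifying a (start, path, destination)
# tuple against a library of integer "transition" vectors. Checking is linear
# in the path length; finding the path is the hard part.
def is_valid(start, path, destination, library):
    state = list(start)
    for step in path:                      # each step names a vector in the library
        vector = library[step]
        state = [s + v for s, v in zip(state, vector)]
        if any(x < 0 for x in state):      # counters may never go negative
            return False
    return state == list(destination)

# Example: trade an apple for a banana, then the banana for an orange.
library = {"apple->banana": (-1, +1, 0), "banana->orange": (0, -1, +1)}
print(is_valid((2, 0, 0), ["apple->banana", "banana->orange"], (1, 0, 1), library))  # True
```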
This sounds like it has properties in common with "multiply two very large primes to create a very large composite number". Could any of this math be used as the basis for a cryptosystem?
Depending on your library of valid transitions, it could be really easy to find a path. That's the issue: there are libraries of transitions where finding a path is really hard, but also libraries where finding a path is easy.
Said another way, we know only the worst-case complexity, not the complexity of a test case chosen at random.
Some lattice-based cryptosystems have such properties: there are theorems showing that if there is an algorithm good at solving random cases, then you can derive from it an algorithm good at solving any case, including the worst ones. So for those problems, the worst-case complexity is the same as the "average/random case" complexity.
(I'm not an expert. Take all of this with a grain of salt and just imagine I'm sticking "I think" in front of every sentence.)
It's easy (O(path length)) to check if a (start, path, destination) tuple is valid, but the paths themselves are of phenomenal lengths:
"He developed a procedure for constructing systems in which the fastest way to determine whether one state is reachable from another is to map out a sequence of transitions between them. That allowed him to use the length of the shortest path between two carefully chosen states as a measure of the difficulty of the reachability problem."
That's the guarantee: we can prove you have to do a ton of computation because you have to traverse this huge path. That's why this whole area of study isn't tainted by the question of whether P=NP. (Formally, because it doesn't have a polynomial-size witness).
So I think you could make a cryptosystem out of this, but, since you'd (for practical reasons) use relatively short paths, you wouldn't get the difficulty guarantees of this research.
It might be possible to compute whether the start and end states are connected without constructing the actual path. As usual, non-trivial lower bounds on computation are basically non-existent.
As an example, we can determine whether a number is composite (not prime) without computing its factors.
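For instance (a hedged sketch using a plain Fermat test; real code would use Miller-Rabin, e.g. sympy.isprime):

```python
# Sketch: a Fermat test can certify that n is composite without revealing
# any factor of n. (Probabilistic; Carmichael numbers can fool it, which is
# why practical code uses Miller-Rabin instead.)
import random

def looks_composite(n, rounds=20):
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # Fermat witness: n is definitely composite
            return True
    return False                    # probably prime (no witness found)

print(looks_composite(221))   # True: 221 = 13 * 17, but we never factored it
```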
I read "He developed a procedure for constructing systems in which the fastest way to determine whether one state is reachable from another is to map out a sequence of transitions between them." as saying that there is a proof that there is not a way to compute whether the start and end States are connected without constructing the actual path.
Without digging too much, I don't think such an argument could be made by this paper. A non-trivial lower bound on a concrete problem in a general computation framework would be a marvel on its own.
As another example of what I mean, take sorting. There is an Ω(n log n) lower bound that applies to comparison-based algorithms, but no lower bound beyond the trivial Ω(n) is known for general computation.
It really boils down to how you define your search space. If you encode your input in a way that is already exponential in some parameter n, then even a linear algorithm would take at least exp(n) time just to read the input.
Let's say you have an algorithm, encoded in L characters, that can produce these extremely long paths. A natural question is then: for any two vectors of n k-bit integers, what is the complexity, in terms of nk and L, of determining whether they are connected? Note that I don't need to generate or read a huge state graph; I could do something smart by understanding the algorithm.
The classic example of this is diffusion. You can track and even reverse abstract aggregate quantities like temperature, but you can't invert the particles (reverse, but not invert). You can't create a bijective/isomorphic map unless you keep track of the forward-time operation. That's why we talk about entropy: many different initial states lead to a particular end state, but the paths are different, so you have a uniqueness problem. You don't have identifiable elements even if you have independent components, so you're always operating with a family of solutions. Basically, your calculus teacher wasn't being an asshole for stressing that you got the wrong solution when you forgot to add "+C".
If you're confused and wondering if I'm talking about ML or thermodynamics the answer is yes.
That description sounds similar to Elliptic Curve cryptography as well: pathfinding/curve-fitting between two or more points on a curve without knowing other specifics of the curve.
"Can a demoscene program with a very limited program size visualize (or codegolf program output) something specific?" - asking for nontrivial properties like this usually requires actually running each program, and there are unfathomably many short programs (https://www.dwitter.net/ is a good example of this)
"In cookie-clicker games, is it possible to go above a certain score within a certain number of game ticks using some sequence of actions?" - in all but the simplest and shortest games (like https://qewasd.com), this is at least not efficiently (optimally) solvable using MILP and the like, as the number of possible action sequences increases exponentially
And yet, despite these being really hard (or in the general case, impossible) problems, humans use some heuristics to achieve progress
The answer that observed this was equivalent to the Halting Problem wasn't keeping the "within a certain number of proof steps" constraint, but was referring to the question of whether an arbitrary proposition is provable or not provable. If you do specify a number of steps in advance, I don't think the problem is solvable faster than brute force in general, but every individual instance can at least be decided by brute force.
In fact, there are tactics in computer proof systems where you specify a search depth, and the tactic can tell you whether or not there is a valid proof (using certain rules of inference, which probably won't be all the rules of inference allowed by the system!) within that depth of your current hypotheses.
This is like the distinction between Gödel's last two definitions in his long chain of definitions in "On Formally Undecidable Propositions":
> 45. x B y ≡ Bw(x) & [l(x)] Gl x = y
> x is a proof of the formula y.
> 46. Bew(x) ≡ (E y) y B x
> x is a provable formula. [Bew(x) is the only one of the concepts 1-46 of which it cannot be asserted that it is recursive.]
Here Gödel notes that all of his previous definitions that used existential quantifiers set an explicit limit for the size of the integer in question (there exists k such that k is less than or equal to ...), but definition 46 doesn't: it just says "there exists an integer y such that y is a proof of x" (according to Gödel numbering).
If you did set a limit on the size of y then you would have a computable function, where the most naive implementation would be to just evaluate everything up to that size limit and check whether it is a valid proof of x (using Gödel's other formulas such as Bw). But it wouldn't match our overall notion of provability, because our notion of provability doesn't set any particular limit on the size of the proof.
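A sketch of that naive bounded search (the `is_proof_of` predicate and `alphabet` are hypothetical stand-ins for Gödel's recursive relation x B y and the system's symbol set, not a real checker):

```python
# Hypothetical sketch: once the existential quantifier is bounded, "provable
# within size limit" becomes decidable by brute force. is_proof_of() stands in
# for a decidable proof-checking predicate like Goedel's definition 45.
from itertools import product

def provable_within(statement, limit, alphabet, is_proof_of):
    for length in range(1, limit + 1):
        for candidate in product(alphabet, repeat=length):
            if is_proof_of(candidate, statement):   # decidable check of one candidate
                return True
    return False   # no proof of size <= limit; says nothing about larger proofs
```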
> If you do specify a number of steps in advance, I don't think the problem is solvable faster than brute force in general, but all individual instances can be decided at least by brute force.
For systems where you can check a given proof in polynomial time, this problem becomes 'merely' NP-complete. (Technically, the maximum size of the proof you are willing to accept needs to be a polynomial in the length of the statement you are trying to prove.)
Hmmm, isn't the number of proofs to check normally exponential in the length of the proof?
E.g. if there are n proofs with length k (characters, applications of rules of inference, intermediate steps), shouldn't there be n² proofs with length 2k? (In some systems you can prune some, but the extent to which you can do that would depend on the representation of the proof and the allowable inferences.)
Details depend on the proof system. But yes, in general you expect that the number of possible proofs goes up exponentially with their length.
But any single given proof can be checked in polynomial time.
That's almost the definition of NP: NP stands for 'nondeterministic polynomial time'. Which means if you can nondeterministically 'guess' a solution (or someone hands it to you etc), you can verify it in polynomial time.
Generally, when we say "humans can solve" these tasks, people are not solving the same problem: they're trying (and succeeding) to find a good solution where various heuristics work, and often only for most realistic cases. The theoretical problem is to have a process that will certainly find a solution guaranteed to be the absolute best possible, even in weird degenerate cases, and that is simply a very different, much harder problem. E.g. the halting problem, traveling salesman, etc. have people commenting that "but humans can solve them" in this manner.
>"Can a demoscene program with a very limited program size visualize (or codegolf program output) something specific?" - asking for nontrivial properties like this usually requires actually running each program, and there are unfathomably many short programs (https://www.dwitter.net/ is a good example of this)
This is equivalent to computing the Kolmogorov complexity of the desired output, which is uncomputable - you can pretty easily show that solving this implies solving the Halting Problem.
If the program size limit is generous enough and/or the required thing to visualize is simple enough, you might be able to give the Yes answer easily without running into any major trouble.
My eyes glazed over midway through the article. I'd like them to not try explaining the problem but just come out and say what the problem is, then explain the details so that I can pick at the pieces I don't get well enough.
I THINK the issue is probably something along the lines of:
Every new type of thing adds a new dimension / fold / layer, and operations that transfer between layers can occur in any cycle. This leads to an exponentially complex region of valid states in a high-dimensional setting, likely with upper and lower bounds as the shape extends across dimensions. This seems potentially intractable to define as more potential actions / actors get involved. Thus it is very difficult to reach or process an equation that defines that shape and thereby validates whether a coordinate within the system space lies on or within the surface of the valid states of the system.
This problem has a deep connection to formal methods: one of the first formal models of concurrent computing was Petri nets, which are equivalent to VASes. Petri nets are very limited in what they can model, and so deciding reachability even for those tiny, limited programs is Ackermann-complete!
What I love so much about this is that it's really hard to find intuitive examples of TOWER-complete or even 2-EXPTIME-complete problems, but this one is so much harder than all of them and can easily be explained to a fifth-grader.
Next goal is to find an intuitive HYPERACK problem...
Most of the competition in life comes from other living things. So any organism's failure to survive and thrive is often because some other organism is stealing their lunch money. But the 'lunch money' (eg a spot in the sunlight) doesn't disappear.
> There's a fixed amount of time on earth or perhaps the universe.
For the universe: it looks like the universe will expand forever, and more and more of it will recede behind the cosmological event horizon (because places that are far enough away will recede from us faster than the speed of light). The amount of usable energy within the spacetime we can reach is most likely finite. (Where 'usable energy' includes considerations of entropy.)
Time might be unlimited in an endlessly expanding universe, but usable energy ain't.
But here's the saving grace: as far as we can tell, the physical lower limit on the energy required for doing computation is proportional to the prevailing temperature. And expansion decreases the temperature of space, or rather of its background radiation.
So if you have 1 J of energy left in your civilisation, you can expend half of it now to do some unit of computation. Then you wait until the temperature of your corner of the universe has dropped by half, and you expend another half of your remaining energy (1/4 J) to do another unit of computation. You can in principle keep doing this indefinitely, and get an infinite amount of computation. You can use that computation to power, e.g., a simulation of life.
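A rough numeric sketch of that halving argument, assuming the Landauer bound of kT·ln 2 joules per irreversible bit operation (the starting temperature and budget are illustrative):

```python
# Rough numeric sketch of the halving argument under the Landauer bound.
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 2.7                   # starting temperature, K (roughly today's CMB)
energy = 1.0              # total budget, J

for step in range(1, 6):
    spend = energy / 2**step                       # spend half of what remains
    temp = T / 2**(step - 1)                       # temperature has halved each step
    ops = spend / (k * temp * math.log(2))         # Landauer-limited bit operations
    print(f"step {step}: {ops:.3e} bit operations")
# Each step performs the same number of operations, so the total number of
# operations diverges even though the total energy spent stays below 1 J.
```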
---
Of course, this assumes you have found materials or mechanisms to run your computation on that can persist for arbitrary amounts of time in stasis without decay or using energy.
The lower limit of energy usage for computation also only applies for irreversible computation. Reversible computation can be had for free, in principle.
With long enough timescales, Boltzmann brains or even Boltzmann planets or Boltzmann galaxies also become a consideration.
> “That’s a very natural restriction for a model of reality,” said Wojciech Czerwiński, a computer scientist at the University of Warsaw. “You cannot have a negative number of apples.”
I realize this is probably outside the interest bounds of the problem space, but I could indeed have negative apples if I owe someone apples I don’t have.
No, then you would have however many apples you actually have, as well as an obligation to provide someone more apples than you have.
Having an obligation to provide a thing is not actually the same thing as having negative one of the thing, even though it may occasionally be useful for some purposes to treat them as the same thing.
(This is somewhat less true with money and things that work like it, since those things are essentially themselves already a social expectation of future goods and services from other people, so the difference between an obligation of money and actual money is somewhat smaller than with other things.)
Interesting result. That's far faster growth than I would have anticipated, since intuitively it seems like taking steps in an n-dimensional space, which does grow fast, just not in a tetrational sense.
Would adding finite debt change anything? Borrowing capacity is still a floor, like zero, but it's a different floor for every actor. Maybe it'd make more problems solvable without reducing the complexity?
If you can go negative, it seems you've turned the problem into a Diophantine equation: the order of trades doesn't matter, and you're just asking whether you can get from the start state to the end state by using each possible trade an integer number of times.
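Sketching that reformulation in my own notation (not from the article): with trade vectors v_1, …, v_m and no non-negativity constraint along the way, reachability becomes the question of whether there exist counts

```latex
c_1,\dots,c_m \in \mathbb{Z}_{\ge 0} \quad\text{such that}\quad
\mathbf{end} = \mathbf{start} + \sum_{i=1}^{m} c_i\,\mathbf{v}_i .
```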
If you can go negative but there's a floor on how negative you can go, is it meaningfully different from not being allowed to drop below zero in terms of worst case complexity?
The order of trades still matters in that case, right?
I did a bit of noodling on the wikipedia page for VASS... Maybe I can make an example with software context... and this is probably wrong in some way so take with a grain of salt!
Take as an example a directed graph representing the state transition diagram of a state machine. The machine has some integers (we can call it memory) as its internal state, in addition to its location in the graph. Moving from one node of the graph along one of its available outbound transitions has an associated effect on the integers in memory (an addition or subtraction particular to each transition).
The VASS reachability problem:
Given a state (memory values and location in the state graph), can you reach some other arbitrarily chosen state by navigating the transition graph, while never allowing the integers in memory to become negative? What is the guaranteed maximum time complexity for deciding whether an arbitrarily chosen final state is reachable?
VAS - the same problem but without the directed graph restricting access to the vectors. I think this one can be intuited as a "vector walk" that must stay in the non-negative orthant, going from a given start location to a given end location, with a list of available vectors that can be used to move around in the space.
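A toy sketch of that picture (my own encoding, so the same grain of salt applies): a VASS as graph edges that each carry a vector effect on the counters, where an edge may fire only if the counters stay non-negative afterwards.

```python
# Toy VASS sketch (my own encoding, not from the paper): configurations are
# (graph node, counter vector).
transitions = {
    # (from_node, to_node): effect on the counters
    ("idle", "working"): (-1, 0),    # consume one unit of counter 0
    ("working", "idle"): (+1, +1),   # release it and bump counter 1
}

def fire(config, edge):
    node, counters = config
    src, dst = edge
    if node != src:
        return None                               # edge not available from this node
    new = tuple(c + d for c, d in zip(counters, transitions[edge]))
    return (dst, new) if all(c >= 0 for c in new) else None  # counters stay non-negative

print(fire(("idle", (1, 0)), ("idle", "working")))     # ('working', (0, 0))
print(fire(("working", (0, 0)), ("working", "idle")))  # ('idle', (1, 1))
```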
Edit: if someone more knowledgeable is reading, please let me know anything incorrect and I will delete or edit.
> Suppose, for instance, that a kindergartner named Alice has two apples, but she prefers oranges. Fortunately, her classmates have developed a healthy fruit-trading system with strictly enforced exchange rates: Give up an apple, say, and you can get a banana. Can Alice execute a series of trades, by picking up and offloading bananas or cantaloupes, that gets her to her favorite fruit?
I think the GP was asking for more detail than that. We have a dimension n. We have a set of vectors, each with n integer components. In the fruit-trading problem these represent legal trades. We have a starting state, with n non-negative integers. To do a "trade" we select one of the vectors in the set and add it to the current state, component by component. The result must be non-negative (in each component). The question to answer is whether, from a given starting state, we can reach a given ending state. The result here is that solving this reachability problem can be hellishly complex.
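A brute-force sketch of exactly that statement (the fruit vectors are made up for illustration): breadth-first search over the reachable states. It only works for tiny instances; the blow-up for general instances is what this result quantifies.

```python
# Brute-force VAS reachability (illustrative fruit trades, not from the
# article): BFS over states reachable via non-negativity-preserving additions.
from collections import deque

trades = [(-1, +1, 0, 0),   # apple -> banana
          (0, -1, +1, 0),   # banana -> cantaloupe
          (0, 0, -1, +1)]   # cantaloupe -> orange

def reachable(start, goal, max_states=100_000):
    seen, queue = {start}, deque([start])
    while queue and len(seen) < max_states:
        state = queue.popleft()
        if state == goal:
            return True
        for t in trades:
            nxt = tuple(s + d for s, d in zip(state, t))
            if all(c >= 0 for c in nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# (apples, bananas, cantaloupes, oranges): can 2 apples become 2 oranges?
print(reachable((2, 0, 0, 0), (0, 0, 0, 2)))   # True
```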
[1] https://codegolf.stackexchange.com/questions/6430/shortest-t...