
It's plausible to me that cooling becomes exponentially harder as you aim for lower temperatures, but I don't understand why you think you need lower temperatures for more qubits. The whole point of quantum error correction is that once you reach a constant threshold, you can use more rounds of error correction to decrease your logical error rate without decreasing your physical error rate.
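A minimal sketch of that threshold behaviour, assuming the commonly quoted surface-code-style scaling p_L ~ A * (p / p_th)^((d+1)/2). The threshold P_TH, prefactor A, and the helper logical_error_rate are illustrative assumptions, not numbers from this thread:

    # Rough illustration of the error correction threshold theorem:
    # with the physical error rate held fixed below an assumed threshold,
    # increasing the code distance suppresses the logical error rate,
    # without lowering the physical error rate itself.

    P_TH = 1e-2   # assumed physical error threshold (illustrative)
    A = 0.1       # assumed prefactor (illustrative)

    def logical_error_rate(p_phys: float, distance: int) -> float:
        """Approximate logical error rate, p_L ~ A * (p/p_th)^((d+1)/2)."""
        return A * (p_phys / P_TH) ** ((distance + 1) / 2)

    if __name__ == "__main__":
        p_phys = 1e-3  # fixed physical error rate, safely below the assumed threshold
        for d in (3, 5, 7, 9, 11):
            print(f"distance {d:2d}: logical error rate ~ {logical_error_rate(p_phys, d):.1e}")

Each step up in code distance multiplies the suppression factor again, which is the sense in which the physical error rate does not need to improve once it is under the threshold.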


That's also plausible. My guess is the same logic would apply: you would need exponentially more error correction for a linear increase in qubits.


They're not talking about lower temperatures, but greater volume.


I had assumed both: greater volume and also lower temperatures. The more qubits you are trying to get to cooperate, the less thermal noise you can tolerate, so the cooler you have to make it.

Yes, more qubits also take up more space, but I hadn't thought of that as a major factor; it certainly could be one if they are physically large. Overall I think the bigger issue is cooling.

The article mentions error correction as an alternative to increased coherence among the qubits. Perhaps what that really means is "to increase meaningful interaction among qubits, make it as cold as you can, and then error correct until you get a meaningful answer." My intuition is that both are a battle against entropy: the need for error correction will also increase exponentially with the number of qubits, simply because the number of possible combinations increases exponentially.

And the even larger outcome of all this is that if this intuition bears out, quantum computing will have no fundamental advantage over conventional computing, which also has a super-polynomial cost in the number of bits for computations such as factoring large numbers.
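As a concrete (if naive) instance of the classical scaling being appealed to here, a trial-division sketch: the worst-case work grows roughly as 2^(n/2) for an n-bit number, so each extra bit multiplies the cost by about sqrt(2). (Better classical algorithms such as the general number field sieve are sub-exponential but still super-polynomial.) The trial_division_ops helper below is a hypothetical illustration, not anything from the thread:

    # Naive trial division: worst-case work grows like sqrt(N) ~ 2^(n/2)
    # for an n-bit number, i.e. exponentially in the bit length.
    # Illustrative only; real classical factoring uses sub-exponential
    # algorithms such as the number field sieve.
    import math

    def trial_division_ops(bits: int) -> float:
        """Rough worst-case count of trial divisions for an n-bit number."""
        return math.sqrt(2 ** bits)

    if __name__ == "__main__":
        for n in (16, 32, 48, 64):
            print(f"{n}-bit number: ~{trial_division_ops(n):.2e} trial divisions")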



