I have forgotten the source where I first stumbled on the qubit error rate required to run Shor's algorithm. I would greatly value it if someone could point me to a key reference here.
A recent result considers a generic error model and a subclass of particular kinds of prime numbers [1]. The important result is that the algorithm fails once $\epsilon > c\,n^{-1/3}$, where $\epsilon$ is the qubit error rate and $n$ is the number of bits in the factorised number; equivalently, one needs $\epsilon < c\,n^{-1/3}$. As $n = 2^N$, where $N$ is the number of qubits, this becomes:
$$\epsilon < c \cdot 2^{-N/3}.$$
Therefore, the tolerable error rate would have to decrease exponentially with the number of logical qubits used in the computation.
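To get a feel for what this bound would demand, here is a minimal numerical sketch in Python. The constant $c$ is not specified above, so I set $c = 1$ purely for illustration, and the qubit counts are arbitrary examples:

```python
import math

# Tolerable per-qubit error rate under the claimed bound
#   epsilon < c * 2^(-N/3),
# with c = 1 assumed purely for illustration (the constant is not given above).
def log10_error_bound(num_qubits: int, c: float = 1.0) -> float:
    """Return log10 of the bound c * 2^(-N/3) on the per-qubit error rate."""
    return math.log10(c) - num_qubits / 3.0 * math.log10(2.0)

# A few representative logical-qubit counts.
for n in (10, 100, 1000, 4096):
    print(f"N = {n:4d}  ->  epsilon < 10^{log10_error_bound(n):.1f}")
```

If I am reading the result correctly, already $N = 100$ logical qubits would require an error rate of order $10^{-10}$, which is what prompts my question.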
I have never seen a result like that. Do you have a citation?