> When the position probability distribution is made narrower, the momentum probability distribution becomes wider ... Note that this is not a limitation of measurement
I see this a lot, but what's the reason we know it's not the case?
Quantum physicist here. This is the way I like to think about it. A quantum system corresponds to some vector (i.e. the state of the system), and there is an infinitude of possible bases in which to write down that vector. Position and momentum are two bases of the same vector space that don't share any element (not only that, they are particularly incompatible in the sense that you need ALL the elements of one basis to decompose a single element of the other). So if you have a very localized system (e.g. one that roughly corresponds to only one basis element in the position basis), you'll need a lot of elements of the momentum basis to describe the same vector, and vice versa. The key to the puzzle is that there's no escaping the uncertainty principle because position and momentum are two incompatible points of view of the same vector space (and of the same vector, if you think about the state).
Update: I think part of what’s confusing about this is that classically we can have pretty much any combination of position and momentum, but quantum mechanically it’s the opposite: once you know the state in the position basis, its momentum representation is also determined and vice versa.
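A minimal numerical sketch of that basis-incompatibility point, assuming Python/NumPy and a finite-dimensional toy model in which the discrete Fourier transform stands in for the continuum position-to-momentum change of basis:

```python
import numpy as np

# Toy model: an N-dimensional state space in which the "position" basis is the
# standard basis and the "momentum" basis is its discrete Fourier transform.
N = 64

# A state that is a single element of the position basis (perfectly localized).
psi_position = np.zeros(N, dtype=complex)
psi_position[N // 2] = 1.0

# The same state written in the "momentum" basis (unitary DFT).
psi_momentum = np.fft.fft(psi_position) / np.sqrt(N)

# Every momentum component has the same magnitude: you need ALL of the momentum
# basis elements, with equal weight, to build a single position basis element.
print(np.allclose(np.abs(psi_momentum), 1 / np.sqrt(N)))  # True
```

The reverse holds too: a single momentum basis element spreads uniformly over every position element.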
Summing all the harmonics of a fundamental frequency gives an impulse train with the fundamental's period. That is, if you push a swing at all overtone frequencies, the only push that lines up is the initial push, while all the other ones don't line up and cancel to zero for the rest of the period. And if your resonance length is all of free space, your fundamental frequency is zero and you get a single delta impulse.
In other words, the more harmonic components there are that interfere with each other, the more localized your spatial distribution is. The fewer interfering components there are, the more spread out it is.
At the classical limit, we assume there are practically infinitely many interfering components, with all of free space as the resonance length. Ergo we have perfect localization.
At the quantum scale, interacting components are few and the length scale is highly constrained, ergo we get very spread-out (due to having few overtones) and quantized (due to small resonance length) position wavefunctions.
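A rough numerical illustration of that impulse-train picture (a sketch, assuming Python/NumPy; the harmonic counts and the 2% window are arbitrary choices): the more harmonics you sum, the larger the fraction of the signal's energy that piles up at the initial "push" and cancels everywhere else.

```python
import numpy as np

# Sum the first n harmonics of a 1 Hz fundamental over one period and watch
# the energy concentrate near t = 0 (the initial push) as more harmonics join.
t = np.linspace(0, 1, 8192, endpoint=False)
for n_harmonics in (5, 50, 500):
    s = sum(np.cos(2 * np.pi * n * t) for n in range(1, n_harmonics + 1))
    near_the_push = (t < 0.01) | (t > 0.99)   # a 2% slice of the period
    # Fraction of the total energy that sits in that narrow slice.
    print(n_harmonics, np.sum(s[near_the_push] ** 2) / np.sum(s ** 2))
```

The printed fraction climbs toward 1 as the number of harmonics grows.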
The relevant theorem is that variance(position-space function f) * variance(Fourier transform of f) >= some constant.
A wavefunction is a wavefunction, it doesn't have more frequency components at the classical scale. It's just that what looks like a large position spread at the quantum scale is pretty tiny compared to classical length scales, and what looks like a large momentum spread (variance of the Fourier transform) is pretty tiny compared to classical-scale momenta.
You shouldn't think of a wavefunction as being perfectly localized, with infinitely spread out frequency components, because that would mean the particle's momentum is infinitely uncertain. Instead, think of a Gaussian function, whose Fourier transform is a Gaussian. The widths of those two Gaussians are inversely proportional to each other.
Also, you should think about continuous Fourier transforms, not discrete Fourier series. Periodic wavefunctions are only the norm in situations like crystals where the environment itself is periodic.
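To make the Gaussian picture concrete, here's a quick sketch (assuming Python/NumPy; the grid and the three widths are arbitrary): shrink the Gaussian and its Fourier transform widens by the same factor, with the product of the two spreads staying fixed.

```python
import numpy as np

def spread(values, weights):
    """Standard deviation of `values` weighted by `weights` (normalized)."""
    p = weights / weights.sum()
    mean = np.sum(values * p)
    return np.sqrt(np.sum((values - mean) ** 2 * p))

x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, dx))  # angular frequency

for sigma in (0.25, 1.0, 4.0):
    f = np.exp(-x**2 / (2 * sigma**2))       # Gaussian of width sigma
    F = np.fft.fftshift(np.fft.fft(f))       # its (discretized) Fourier transform
    # Width in x times width in k is the same constant, whatever sigma is.
    print(sigma, spread(x, np.abs(f)**2) * spread(k, np.abs(F)**2))
```

Each line prints the same product (1/2 with these conventions), the minimum allowed by the variance theorem above.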
Isn't the continuous Fourier transform simply the limit of the Fourier series as the fundamental frequency approaches zero, though? That's what I meant when I said "all of free space"; it's the same idea as the emergence of "continuous" band structures in bulk materials -- although their sizes aren't strictly infinite, for all practical purposes they're large enough that the energy levels are sufficiently close together to be considered a continuum.
And yes, I stand corrected. My analogy of overtone bandwidth vs. spatial localization instead pertains to the position-momentum tradeoff; it is indeed incorrect to overgeneralize it to what the classical limit means. The classical limit, as you correctly pointed out, is more about the tradeoff between the two being practically negligible at the length/momentum scales dealt with in the classical regime -- effectively a "rounding error", so to speak.
To lower the abstraction level a bit, in tangible 1-D Schrodinger first-year QM terms: 1) All of the information about the state is in the wave function. 2) Per Fourier-type limits, you cannot even represent a known position and a known momentum in a wave function past a limit on the multiplicative product of the calculable "spreads" or uncertainties. (A smaller spread in position as implied by a wavefunction forces a larger spread in momentum as calculated from the same.)
This can clarify that the limit is not because "measurement introduces disturbance". Rather, representation forces spread(s).
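A first-year-style numerical check of point 2, as a sketch (Python/NumPy, with hbar = 1 and an arbitrary non-Gaussian test wavefunction): compute the position spread from |psi(x)|^2 and the momentum spread from the Fourier-transformed |phi(p)|^2, and the product of the two comes out above hbar/2.

```python
import numpy as np

hbar = 1.0  # natural units

# A smooth, non-Gaussian 1-D wavefunction: a cos^2 bump of width L.
L = 2.0
x = np.linspace(-20, 20, 2**14)
dx = x[1] - x[0]
psi = np.where(np.abs(x) < L / 2, np.cos(np.pi * x / L) ** 2, 0.0)
psi = psi / np.sqrt(np.sum(psi**2) * dx)          # normalize

# Momentum representation of the SAME wavefunction, via Fourier transform.
p = 2 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(x.size, dx))
dp = p[1] - p[0]
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)

def sigma(values, amplitude, step):
    prob = np.abs(amplitude) ** 2 * step          # probability per grid cell
    mean = np.sum(values * prob)
    return np.sqrt(np.sum((values - mean) ** 2 * prob))

# The product of the spreads exceeds hbar/2 (a Gaussian would saturate it).
print(sigma(x, psi, dx) * sigma(p, phi, dp))      # ~0.51 > 0.5
```

The spreads come straight from the representation itself; no measurement appears anywhere in the calculation.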
Question: Let's say we added a single particle X (or, well, multiple) to our otherwise quantum world whose position and momentum we could pin down with arbitrary accuracy. What would happen if we used that particle to measure another particle Y? Normally Y has some fundamental uncertainty around its position and momentum, but would having access to X let us bypass that limit? If not, what would happen to the total uncertainty in the system after the measurement?
(If that would allow us to break the limit, then it would seem like this really is a measurement limitation in some sense, so I'm assuming the answer is that that's not the case, but then I'm wondering what would happen afterward.)
Bypass the limit?
No.
Partially because you need to propagate that measurement somehow to your brain.
But assuming that we have the "pin down perfectly" part (idk how that would even be possible):
Mostly because the position/momentum of the measured particle would immediately become uncertain right after the measurement (it becomes a cloud of probabilities again).
The total uncertainty in the system would likely decrease, since you can think of it like wavefunction collapse. Of course, everything depends on how exactly the situation works mathematically, which we haven't actually defined yet.
To be clear, it is very well known that it isn't a measurement limitation. It's similar to asking what "the length of an oval" is, since there are many ways to measure an oval. It's not a "well-defined" question, and in fact, quantum mechanics requires it to be not well-defined.
For more info, check out the Fourier transform and how it necessitates the Heisenberg uncertainty principle.
I (and many other physics people) find that people have a very hard time even trying to accept the nature of reality when it comes with uncertainty, and I think that's very normal. But according to our best known models, uncertainty is the nature of reality, and not because we can't measure enough.
Thanks. Yes, I get the Fourier transform thing (and Bell's theorem etc.), but imagine asking Newton the consequences of finite-speed gravity. He could either imagine how to reconcile that with his theory and hypothesize what the outcome might be (e.g., he might say if the sun exploded then you'd find out 8 minutes later), or he could tell you it's intrinsically infinite and that your question is like asking "what the length of an oval" is. I find one of these answers more satisfactory than the other.
Now, in this particular case: I understand superdeterminism would mean the entire world could be deterministic and yet consistent with QM, correct? And I understand a reason why the world might be superdeterministic is that everything is already correlated/entangled together (say, from the big bang), thus making this in fact inherently a "measurement problem" in that we don't have any unentangled measurement particles available... right?
If you buy that so far, then here's where I'm trying to go with this: if QM's response to this is "well, if you did obtain such an unentangled particle, you could use it to reduce the uncertainty in your next measurement beyond your current limits", then it seems to me QM is in favor of the world being superdeterministic, and the uncertainty we face is more coincidental than fundamental. Whereas if QM's response was "well, even if you had such a particle, you couldn't use it to reduce the uncertainty in your next measurement", then it seems to me QM believes the measurement limit is more fundamental than coincidental. Given the situation seems like the former to me, is there any reason to bet against the world being superdeterministic? If anything, it seems to me that superdeterminism has the big bang going for it, no?
> Let's say we added a single particle X (or, well, multiple) to our otherwise quantum world whose position and momentum we could pin down with arbitrary accuracy.
That particle X does not exist. We can't add it to the world.
It's not like I was claiming it exists. I'm asking, if you did observe one tomorrow, what would be the theory's best prediction of what would happen next?
And yes, I get the whole Fourier transform uncertainty making the math inconsistent, but that's not an answer for me here. Like if you asked Newton "if gravity didn't travel instantaneously, what would be your theory's best prediction?", he would probably be able to give you a better answer of what he expects the consequences would be than "that's impossible, the math would be inconsistent".
If you observe one tomorrow, then the theory is all wrong and people will all yell "Hey, new physics", celebrate, and try to discover why nobody has seen anything like it before. There is no "best prediction", the entire theory is invalid.
It would be like asking Newton "hey, if gravity didn't exist at all, and things traveled at path-dependent trajectories, what would your theory predict?" The answer is that it doesn't predict anything.
Your question itself assumes away quantum mechanics. There is fundamentally no fully certain particle in QM, the theory cannot meaningfully make predictions about something that's fully certain.
I've been out of school a while so might be wrong but maybe, just maybe, you could torture some mathematics into giving you some infinities if you really wanted to get a "prediction"?
But it's sort of like asking "how would linear algebra work if all nonzero matrices were invertible?" Well, not all nonzero matrices are invertible; a definition of "matrix" that allows every nonzero matrix to be inverted is simply describing a different object.
I think the more general question they're asking is whether it's possible to hack the math to break the uncertainty principle. As an example, if you take a Monte Carlo estimate of some quantity, you have uncertainty associated with that estimate. But there are ways to reduce that uncertainty using additional information (e.g. control variates, sampling tricks, partially solving the problem analytically, etc.). The example they're giving for a hypothetical particle with zero uncertainty might not be a great one, but I think the idea itself shouldn't be dismissed outright.
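For what it's worth, here's a sketch of that classical-statistics idea using a control variate (Python/NumPy; the integrand E[exp(U)] and the choice of U as the control are just illustrative): side information with a known expectation shrinks the spread of a Monte Carlo estimate. This is only the classical analogy, not a statement about quantum states.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)

# Plain Monte Carlo estimate of E[exp(U)] for U ~ Uniform(0, 1).
f = np.exp(u)

# Control variate: U itself, whose true mean (1/2) we know exactly.
beta = np.cov(f, u)[0, 1] / np.var(u)
f_controlled = f - beta * (u - 0.5)

# Both estimators target e - 1, but the controlled samples vary far less.
print(f.mean(), f_controlled.mean(), np.e - 1)
print(f.std(), f_controlled.std())
```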
I think that misses the point of the exercise. What the OP is asking is more analogous to using negative energy in conjunction with existing models to see what would happen. It's not reflective of reality, but it tells you something about the limits of the model.
As GP said, position and momentum for a given system are the same thing represented using different basis vectors.
Let's use a more intuitive example outside of quantum mechanics, in regular linear algebra in R3. If you have something in 3D space, you can come up with multiple different coordinate systems to describe things in that space. Once you specify the position of an object in one coordinate system, the position of that object in all other coordinate systems is also determined.
In quantum mechanics, position and momentum have this sort of relationship. If you specify the position distribution of something, you can transform it into its corresponding distribution in momentum space.
Another comparison is with frequency/time of a signal. Once you specify how a signal looks over time, its frequency spectrum is determined and vice versa; the two are Fourier transforms of each other.
Imagine you want to calculate what frequencies are present in a sound. As the width of the time window you are looking at goes to zero, you just have a single point. There's no frequency content in a single point! It's just a number. As you look at a longer time window, you can see what frequencies are present over the window, but you lose some sense of the time "resolution" over which that is meaningful. This is fundamental and inherent in the Fourier transform.
Put differently, you can't have a signal that is simultaneously both "band-limited" and "time-limited". Other conjugate variables, like position and momentum, work the same way.
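A quick sketch of the time/frequency version (Python/NumPy; the 440 Hz tone, 8 kHz sample rate, and window lengths are arbitrary choices): the shorter the chunk of signal you analyze, the broader the spectral peak that represents "the" frequency.

```python
import numpy as np

fs = 8000                                      # sample rate in Hz

def tone(n_samples, freq=440.0):
    return np.sin(2 * np.pi * freq * np.arange(n_samples) / fs)

for window_len in (64, 512, 4096):             # window length in samples
    segment = tone(window_len)
    # Zero-pad the FFT so the shape of the peak is finely sampled.
    spectrum = np.abs(np.fft.rfft(segment, n=65536))
    freqs = np.fft.rfftfreq(65536, 1 / fs)
    above_half_max = freqs[spectrum > spectrum.max() / 2]
    width = above_half_max.max() - above_half_max.min()
    print(f"{window_len / fs:.3f} s window -> peak about {width:.1f} Hz wide")
```

The peak width scales roughly as one over the window length, which is the time-bandwidth tradeoff in miniature.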
They are conjugate variables (i.e. Fourier transform duals), which arise in quantum mechanics simply due to the matter wave nature of all quantum objects.
I think this is fundamentally a question of interpretation of quantum mechanics. We (society/physicists) do some set of experiments, and come up with a model to explain all of the resulting measurements. Quantum mechanics is a model that explains almost all of the phenomena that we observe. However, it is fundamentally only a description of our observations.
To answer the question directly, in quantum mechanics, a state (of some particle, or collection of particles, as in the blog post) is represented as some vector, and we represent "operators" (which can represent position, momentum, spin, etc...) as matrices acting on the state vectors. When we do a measurement, the state is projected onto an eigenvector of the operator. If we do multiple measurements we have to project multiple times, along different bases. This process is not necessarily commutative. If I measure position and then momentum I will get a fundamentally different result than if I measure momentum and then position.
If we make measurements of two observables that have the same basis, then the two matrices will commute, and there is no such limitation. However, with non-commuting observables this is a fundamental limitation, no matter how good your measurement is, you will always be projecting the initial state in different ways depending on how you measure the different observables.
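Here's a minimal sketch of that non-commutativity with the smallest possible example (Python/NumPy, using the Pauli Z and X matrices as stand-in observables; position and momentum behave the same way but need infinite-dimensional operators): projecting onto eigenvectors in one order leaves the system in a different state than the other order.

```python
import numpy as np

# Two observables on a 2-state system that do not commute.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
print(Z @ X - X @ Z)                          # nonzero commutator

z_up = np.array([1.0, 0.0])                   # an eigenvector of Z
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)    # an eigenvector of X

def project(state, eigvec):
    """One measurement outcome: project onto an eigenvector and renormalize."""
    new = eigvec * np.dot(eigvec, state)
    return new / np.linalg.norm(new)

state = z_up                                  # start in a definite-Z state
print(project(project(state, x_plus), z_up))  # "measure" X, then Z -> [1, 0]
print(project(project(state, z_up), x_plus))  # "measure" Z, then X -> [0.707, 0.707]
```

The two orders end in different states, which is exactly the non-commutativity described above.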
In quantum mechanics, the impossibility of localizing in both position and momentum simultaneously is a corollary of the more fundamental principle of wave-particle duality:
1. Everything has both particle and wave characteristics.
2. Position comes from looking at the thing from the particle perspective, while momentum comes from looking at the thing from the wave perspective.
3. It's mathematically impossible for a thing to be a "perfect" point particle and also a "perfect" wave (i.e. a sine function).
> the uncertainty principle is inherent in the properties of all wave-like systems... it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.
Because it is part of the theory of quantum mechanics itself. It is not just an empirical fact; it is a prediction of the theory.
Similar behavior shows up in Fourier transforms [1]. When an audio sample is converted into frequency space, it shows a similar "narrowing/widening" relationship with the original audio sample. This again isn't a limitation of measurement; it's just the relationship between the two bases.
Are you asking why we say that the momentum is physically spread through a distribution instead of a precise value that we just happen to not know?
The hypothesis that it has a precise value is called "hidden variable theory" if you want to search for it. We know that several variants of it are false because of how particles interfere with each other and how they can become correlated. The interference math is quite simple (the fact that particles have destructive interference means they aren't just a lot of stuff with unknown properties), but the math on correlation is complicated.
There is still a hypothesis called "superdeterminism" that says all the information about the entire past and future of particles has always existed with infinite precision. That one we can't rule out.
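The simple interference math referred to above, as a toy sketch (Python; the 50/50 amplitudes and relative phase are made up): if the particle merely had an unknown-but-definite path, the two paths' probabilities would add, but adding amplitudes first lets them cancel entirely.

```python
import numpy as np

# Two paths to the same detector, equal magnitude, opposite phase.
amp_path_1 = np.sqrt(0.5) + 0.0j
amp_path_2 = np.sqrt(0.5) * np.exp(1j * np.pi)

# Quantum rule: add the amplitudes, then square.
p_interference = abs(amp_path_1 + amp_path_2) ** 2          # ~0.0, total cancellation

# "Definite but unknown path" rule: add the probabilities of each path.
p_ignorance = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2    # 1.0

print(p_interference, p_ignorance)
```

No assignment of definite-but-unknown paths reproduces the zero, which is the simple part of why naive hidden-variable pictures fail.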
It has to do with the wave-particle duality of matter. As you more finely attempt to measure the momentum (a more fundamental measurement compared to kinetic energy) and its location, you run into the wave-like nature of matter. It becomes more of a distribution rather than a point measurement. I won’t attempt to show the mathematics behind this, I’ll leave that for a physicist or mathematician.
There was an article posted here recently that argued quite well for the wave-only view, avoiding the wave-particle duality by accepting that particles don't exist:
In classical mechanics momentum is the derivative of position with respect to time. In quantum mechanics momentum is (up to a constant factor of -iħ) the gradient, i.e. the combination of the partial derivatives with respect to the three directions of space, of the wave function [1], i.e. the probability amplitude of the particle as a function of position.
In consequence, classically one can prepare a particle with any position and any momentum independently, i.e. one fixes the position and the time derivative of the position. Quantum mechanically, fixing the position distribution also fixes the momentum distribution, as the latter is obtained from the former by differentiation with respect to space and therefore cannot be specified independently. Similarly to the classical case one can also fix the time derivative of the wave function, but that corresponds to the energy, not the momentum.
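A numerical sketch of that statement (Python/NumPy, with hbar = 1 and an arbitrary Gaussian wave packet): applying the spatial derivative (times -i*hbar) to the position-space wave function already yields the packet's momentum expectation, so there is nothing extra left to specify.

```python
import numpy as np

hbar = 1.0

# A Gaussian wave packet carrying momentum p0: psi(x) ~ exp(-x^2/4) * exp(i*p0*x/hbar)
p0 = 3.0
x = np.linspace(-30, 30, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4) * np.exp(1j * p0 * x / hbar)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize

# Momentum from the wave function itself: <p> = integral of conj(psi) * (-i*hbar d/dx) psi
dpsi_dx = np.gradient(psi, dx)
p_expectation = np.real(np.sum(np.conj(psi) * (-1j * hbar) * dpsi_dx) * dx)
print(p_expectation)                                     # ~= p0 = 3.0
```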
You can't specify the time derivative of the wavefunction independently, because the Schrodinger equation is first-order (unlike classical dynamical laws which are second-order).
It has been a while since I studied solid state physics but yes, exactly correct.
The fun effects of semiconductors rely on the electronic properties of certain solids (or of their electrons, equivalently their absence, i.e. 'holes'), which take place at a small scale, hence the use of quantum mechanics.
Some of these effects can manifest at larger scales, and can be modelled classically too.
While every MOSFET has a mode of operation that could be considered genuinely quantum (the tunnel effect), that mode is not used in computers; it is used in microwave generators/detectors.
What is used in computers is just the switching mode (an electric field creating/moving/changing a conducting zone), which is mostly classical physics and was created as the fruit of garage engineering, without an understanding of quantum mechanics.
And yes, this is a very fuzzy border. As far as I know, effects are considered 100% quantum when working with a single atom or a single electron (something <= 1 nm), but 100 nm transistors, for example, are usually treated as classical physics.
BTW, it would be very interesting to see what an image sensor with ~1 nm cells would show. Today their cells are much larger than the wavelength of light, but superlenses that can focus on nm-sized things have already been created.