
More than "this exact Canon lens on this exact sensor", I think the question is "why not this Canon lens approach on a normal high-end camera, so you don't have to buy a $30k camera that does one specific thing?" You do answer that, but maybe not in a way that's directly obvious to those not versed in professional photography or videography.

Blackmagic does have e.g. a 12k non-stereo Ursa Cine but, like you hint at, whatever they can do in non-stereo can always be done better in stereo, because a two-sensor setup has 4x the sensor area of a half-sensor-per-eye setup. Sensor area (for sensors of an equivalent class) determines the quality of the recording. When quality is what matters in a professional setting, it doesn't matter (in this market segment) that there is a solution $20k cheaper if it's always going to be inferior by design. They don't expect to sell many of these to professionals anyway, so it's fine that it doesn't make cost sense to the average person.

Everything else (recording workflows and settings, IPD, frame rates, editing software) can be identical with either approach, but sensor area is sensor area, and there is nothing that can be done to fix that.



> Blackmagic does have e.g. a 12k non-stereo Ursa Cine

But that’s still not 16k of pixels. You don’t even need two 8k sensors to make this work: just aim the stereo lenses at different parts of a single 16k sensor. The Canon solution is simply lacking in IPD and pixels.

> Sensor area (for equivalent class sensors) determines the quality of the recording.

This is false. Going to get up on my soapbox again here:

Larger sensors actually have more noise (noise is proportional to the square root of the sensor area).

It’s easy to understand the confusion, though: Putting a larger sensor behind the same lens is the opposite of cropping… you get a larger field of view and less image detail. Thus, keeping field of view the same, a larger sensor forces you to use a lens with a longer focal length.

Now, if you re-grind the original lens to have a longer focal length, you encounter another problem: the same physical aperture divided by the new, longer focal length means you have a larger focal ratio (the number in F/<number> gets bigger). You have a dimmer lens!

So, to keep the same focal ratio (“F-stop”), you need a lens with a larger physical aperture… That larger physical aperture is collecting more light onto your sensor!
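To put numbers on the argument, here's a quick sketch with made-up values (a hypothetical 50 mm f/2 lens, scaled up for a sensor with twice the linear size):

```python
# f-number = focal length / physical aperture diameter
focal_length = 50.0   # mm, hypothetical original lens
aperture = 25.0       # mm physical aperture
print(focal_length / aperture)        # 2.0 -> an f/2 lens

# Same field of view on a sensor 2x the linear size needs 2x the
# focal length; with the same 25 mm aperture the lens gets dimmer:
longer_focal = 100.0
print(longer_focal / aperture)        # 4.0 -> only f/4 (two stops dimmer)

# Holding f/2 instead forces a 50 mm aperture, which has 4x the
# area and therefore collects 4x the light:
needed_aperture = longer_focal / 2.0
print(needed_aperture)                # 50.0
print((needed_aperture / aperture) ** 2)  # 4.0
```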

That’s why everyone seems to think larger sensors are better. It’s the lens you are forced to use, not the sensor itself.

Since light collected is directly proportional to the area of the lens (and lens area is proportional to sensor area, see above), while sensor noise is only proportional to sqrt(area), the signal-to-noise ratio goes as area/sqrt(area) = sqrt(area).

But that’s not the same thing as saying a larger sensor is better… you could have just used a lens with a larger physical aperture in the first place. You don’t need a larger sensor to do that.
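The scaling above fits in a couple of lines (the proportionality constants are arbitrary; only the exponents matter):

```python
import math

def snr_scaling(area_ratio):
    # Per the argument above: collected light scales with lens (and
    # hence sensor) area, while noise scales with sqrt(sensor area).
    signal = area_ratio
    noise = math.sqrt(area_ratio)
    return signal / noise  # = sqrt(area_ratio)

print(snr_scaling(4.0))  # 2.0: quadruple the area, double the SNR
```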


As someone who has designed a customised camera with a CMOS sensor, I feel the urge to disagree: in my experience, the biggest issue for quality was that the sensor readout generates heat, and that heat triggers random charges in the sensor. Using a sensor with larger pixels means the readout energy is spread over a larger area and therefore has lower intensity, so in a way a larger sensor works like a larger heatsink. This effect is also why astrophotographers cool their equipment.
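As a rough illustration of why cooling matters: a common rule of thumb is that dark current roughly doubles every ~6 °C. The reference value and doubling interval below are illustrative, not taken from any particular sensor's datasheet:

```python
def dark_current(i_ref, temp_c, temp_ref_c=25.0, doubling_deg=6.0):
    # Rule-of-thumb model: thermal dark current doubles roughly every
    # ~6 degrees C. i_ref is the (hypothetical) value at temp_ref_c.
    return i_ref * 2.0 ** ((temp_c - temp_ref_c) / doubling_deg)

print(dark_current(100.0, 25.0))  # 100.0 (reference temperature)
print(dark_current(100.0, 1.0))   # 6.25  (cooled by 24 C: ~16x less)
```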

You're of course correct that the better lens helps. But a bigger sensor can also be better by itself.


> But that’s not the same thing as saying a larger sensor is better… you could have just used a lens with a larger physical aperture in the first place. You don’t need a larger sensor to do that.

Most optical aberrations grow with high powers of the relative aperture (they blow up as the f-number drops), so ultra-fast lenses are highly undesirable to make, and it quite quickly becomes cheaper to use a larger sensor with a slower lens. Try matching a jellybean 85/2 lens on a full-frame sensor with e.g. MFT: it's going to be rather expensive. Then try matching an 85/1.4 or 85/1.2 (nowadays not uncommon) and you find yourself at "that's not physically possible".
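The usual "equivalence" arithmetic makes the point: dividing both the focal length and the f-number by the crop factor gives the smaller-format lens with the same field of view and the same physical aperture diameter (crop factors rounded; this is just a sketch):

```python
def equivalent_lens(focal_mm, f_number, crop_factor):
    # Same field of view and same physical aperture on the smaller
    # format: divide both the focal length and the f-number by the
    # crop factor.
    return focal_mm / crop_factor, f_number / crop_factor

# Matching full-frame lenses on Micro Four Thirds (crop factor ~2):
print(equivalent_lens(85, 2.0, 2.0))  # (42.5, 1.0) -> f/1.0: exotic but buildable
print(equivalent_lens(85, 1.2, 2.0))  # (42.5, 0.6) -> f/0.6: not happening
```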

Coincidentally, full-frame sensors can be made from just two stitched exposures on a regular chip stepper, so they're sort of the largest sensor size before cost explodes. Meanwhile S35/APS-C offers some real cost savings (single exposure).


It's an interesting question to compare video quality in mono vs stereo.

In stereo you really do have more visual information. It's not unusual for 10% of the pixels in a stereogram (say a close-up of a person) to be unique to one channel. On top of that, the left- and right-eye pixels that are shared must be worth more than one mono pixel, even if they aren't worth two.

Although I get MPOs with two JPEGs in one file from my New 3DS, stereo content is frequently delivered in side-by-side format as one big JPEG. Stereo movies and TV frequently use side-by-side with half horizontal resolution, on the assumption that stereo feeding your eyes and brain more data makes up for it, although it probably doesn't match the original perceived resolution.
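Half-width side-by-side packing is simple enough to sketch (naive decimation below; real encoders filter before subsampling, and the frame sizes are just examples):

```python
import numpy as np

def pack_side_by_side(left, right):
    # Squeeze each eye to half its horizontal resolution, then place
    # the two halves in one full-size frame (height, width, channels).
    half_l = left[:, ::2]   # naive 2:1 horizontal decimation
    half_r = right[:, ::2]
    return np.hstack([half_l, half_r])

left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.full((1080, 1920, 3), 255, dtype=np.uint8)
frame = pack_side_by_side(left, right)
print(frame.shape)  # (1080, 1920, 3): same size as one mono frame
```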


I’m not sure the Ursa Immersive is actually two sensors, though it might be. It’s based on the Ursa Cine 17k (which is shockingly close to the exact resolution needed) so it might be a single sensor as well.

Which would help with synchronized sensor readout.


It claims dual sensors on the product page: https://www.blackmagicdesign.com/media/release/20241217-01#:...

Of course it's still possible that's really just one sensor with a logical split, which would be some disappointing marketing.


Ah good catch. I suppose they can effectively halve the 17k in that case.

But very impressive that they have such tight synchronization between sensor readouts to feel comfortable splitting it.



