
Super resolution refers to imaging below the diffraction limit, more or less by having the receiving sensor within a wavelength or two of the material being imaged. That lets you use the nearfield (which doesn't have a diffraction limit, but which also doesn't propagate beyond a couple of wavelengths) instead of the farfield (which does have one, and does propagate).
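
A quick way to see why the nearfield carries sub-wavelength detail but dies off so fast (this is the standard plane-wave decomposition, not something from the comment above): field components with spatial frequency k_x above the free-space wavenumber k get an imaginary propagation constant and decay exponentially with distance z from the surface:

    k_z = \sqrt{k^2 - k_x^2}, \qquad k = \frac{2\pi}{\lambda}

    k_x > k \;\Rightarrow\; k_z = i\,|k_z|, \qquad E(z) \propto e^{-|k_z|\, z}

Detail at k_x = 2k, say, has already fallen by a factor of e^{-2\pi\sqrt{3}} (about 10^-5) one wavelength out, which is why the sensor has to sit within a wavelength or two.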

It's unrelated to the Nvidia marketing term for AI filtering of images.



Nvidia's DLSS Super Resolution doesn't bypass the diffraction limit, but it does go beyond the single-image Nyquist limit: it undersamples the input render, jitters the projection matrix each frame, and reconstructs a higher resolution from frame history. It's reconstructing real extra detail. Some parts, like disoccluded areas that no previous frame has seen, are fully hallucinated though.
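
The jitter half of that is simple enough to sketch. Here's a toy version (Python/numpy; the function names are made up, and it assumes a row-major OpenGL-style projection matrix where clip-space w is -z_view), not Nvidia's actual code:

    import numpy as np

    def halton(index, base):
        # Radical-inverse (Halton) value in [0, 1); gives well-spread jitter offsets.
        result, f = 0.0, 1.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    def jittered_projection(proj, frame, width, height):
        # Shift the projection by a sub-pixel amount that changes every frame,
        # so successive frames sample different positions inside each pixel.
        jx = halton(frame % 16 + 1, 2) - 0.5   # pixels, in [-0.5, 0.5)
        jy = halton(frame % 16 + 1, 3) - 0.5
        out = proj.copy()
        # One pixel is 2/width of NDC space; entries [0,2]/[1,2] scale with
        # view-space z, which cancels against clip w on the perspective divide.
        out[0, 2] += 2.0 * jx / width
        out[1, 2] += 2.0 * jy / height
        return out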

With camera movement and things like the gaps between camera sensor elements acting as the undersampling, their video super resolution may be learning similar ways of legitimately reconstructing a higher resolution from temporal data, though it also hallucinates some detail and is working with already-compressed video where much of that information may be lost.

I'm not sure whether their video super resolution actually does this kind of temporal reconstruction or is just a repeated single-image upscale, but some AI video upscalers do, and they get much better results on longer windows of frames than when run frame by frame without context.
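
For reference, the classic non-learned version of that temporal reconstruction is just reprojection plus an exponential moving average over the jittered frames. A toy sketch of the accumulation step (Python/numpy, made-up names, nearest-neighbour reprojection, and no disocclusion rejection):

    import numpy as np

    def accumulate(history, current, motion, alpha=0.1):
        # Blend the current undersampled frame into a running history buffer.
        # motion[y, x] = (dy, dx) offset back to where this pixel was last frame.
        h, w = current.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_y = np.clip(np.rint(ys + motion[..., 0]).astype(int), 0, h - 1)
        src_x = np.clip(np.rint(xs + motion[..., 1]).astype(int), 0, w - 1)
        reprojected = history[src_y, src_x]
        # Exponential moving average: each new jittered sample refines the
        # accumulated image instead of replacing it.
        return alpha * current + (1.0 - alpha) * reprojected

A learned upscaler effectively replaces that fixed blend with something much smarter about which history to trust, which fits with longer frame windows helping so much.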



