
With enough lines, enough brightness, and a high-enough refresh rate, it may become possible to have a display that can artificially emulate the features of a CRT -- including phosphor persistence, blooming, focus issues, power supply sag, and everything else, along with interlacing. AFAICT, we aren't there yet.

If/when this happens, we may again be able to view things much as they looked in broadcast, but on a modern display instead of an old CRT.

If we can find any content to play that way, anyhow. A lot of it is cheerfully being ruined.

Aside from the dozens of us who are interested in that, most of the rest of the folks seem convinced that television looked just as terrible as a blurry SLP VHS tape does after being played through a $12 composite-to-USB frame grabber, using a 3.5mm "aux cable" jammed in between the RCA jacks of the VCR and the converter, and ultimately delivered by an awful 360p30 codec on YouTube before being scaled in the blurriest way possible...and they draw from this the conclusion that there are no details of any value worth preserving.

Even though television was never actually like that. It had limits, but things could be quite a lot better than that awful mess I just described.

(For those here who don't know: towards the end of the run, the quality of a good broadcast on a good receiver was often in the same ballpark as the composite output of a DVD player today (but with zero data compression artifacts instead of more than zero), including the presentation of 50 or 60 independent display updates per second.)



> With enough lines, enough brightness, and a high-enough refresh rate, it may become possible to have a display that can artificially emulate the features of a CRT -- including phosphor persistence, blooming, focus issues, power supply sag, and everything else, along with interlacing. AFAICT, we aren't there yet.

To truly do that, you need to display over 20 million frames a second.

Why?

True analog video didn't capture frames; instead, each pixel was transmitted / recorded as it was captured. This becomes clear when watching shows like Mr. Rogers on an LCD: when the camera pans, the walls look all slanted. (This never happened when viewing on a CRT.) It's because the top part of the image was captured before the bottom part. I wouldn't even expect a 60i -> 60p deinterlacer to correct it.
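
To put rough numbers on that (these use standard NTSC-M timing plus one common BT.601-style sampling rate; the exact figure depends on how finely you slice each line, so treat it as a back-of-the-envelope sketch, not a spec):

    # Back-of-the-envelope NTSC timing, showing why "one display update per
    # pixel" lands in the tens of millions per second. Line/frame counts are
    # the standard NTSC-M values; the per-line sample count is just one common
    # digitization choice (BT.601), not something fixed by the analog signal.

    lines_per_frame = 525               # total scan lines, including blanking
    frame_rate = 30000 / 1001           # ~29.97 frames per second
    line_rate = lines_per_frame * frame_rate        # ~15,734 lines per second

    samples_per_line = 858              # BT.601 sampling at 13.5 MHz (720 active)
    sample_rate = samples_per_line * line_rate      # ~13.5 million samples/second

    # Top-to-bottom skew within one 262.5-line field: this is the ~17 ms that
    # makes straight walls lean during a pan on a whole-field-at-once display.
    field_skew_ms = (lines_per_frame / 2) / line_rate * 1000

    print(f"line rate:   {line_rate:,.0f} lines/s")
    print(f"sample rate: {sample_rate / 1e6:.1f} million samples/s")
    print(f"field skew:  {field_skew_ms:.1f} ms")

That ~17 ms of skew within each field is exactly what a sample-and-hold display exposes all at once.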

That being said, I don't want to emulate a CRT:

- I want a deinterlacer that can figure out how to make the (cough) best image possible so deinterlacing artifacts aren't noticeable. (Unless I slow down the video / look at stills.)

- I want some kind of machine-learning algorithm that can handle the fact that the top of the picture was captured slightly before the bottom of the picture; then generate a 120p or a 240p video.

CRTs had a look that wasn't completely natural; it was pleasant, like old tube amplifiers and tube-based mixers, but it isn't something that I care to reproduce.


I definitely understand you very well, and I agree.

Please allow me to restate my intent: with enough angular resolution (our eyes have limits), and enough brightness and refresh rate, we can maybe get close to the way watching television was once perceived.

And to clarify: I don't propose completely chasing the beam with OLED, but rather emulating the CRT: the appearance of interlaced video (each field an uncorrelated, as-it-happens scan of the continuously-changing reality in front of the analog camera that captured it), the scan lines that resulted, and the persistence and softness that allowed it to be perceived as well as it once was.

In this way, panning in an unmodified Mr Rogers video works on a [future] modern display, sports games and rocket launches are perceived largely as they were instead of as a series of frames, and so on. This process doesn't have to be perfect; it just needs to be close enough that it looks roughly the same (largely no better, nor any worse) as it once did.
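
Roughly, the bookkeeping I have in mind looks something like this (just an illustrative sketch of one piece of it; the decay factor and frame geometry are placeholders, not measurements of any real tube):

    # Illustrative sketch only: interlaced fields drawn onto alternating scan
    # lines, with earlier light fading like phosphor. The decay factor and
    # frame geometry are placeholder guesses.
    import numpy as np

    def emulate_fields(fields, decay_per_field=0.25):
        """fields: iterable of (field_index, luma array of shape (243, 720)).
        Even-numbered fields refresh the even output lines, odd fields the odd
        lines; everything else keeps fading. Yields one 486-line frame per field."""
        screen = np.zeros((486, 720), dtype=np.float32)
        for index, field in fields:
            screen *= decay_per_field        # phosphor-style exponential falloff
            screen[index % 2::2, :] = field  # only this field's lines are rewritten
            yield screen.copy()

A real version would also emit many output frames per field (decaying within the field, not just between fields) and add the scanline structure and softness, but the field-parity handling is the part that keeps an unmodified interlaced source looking like motion instead of combing.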

My completely hypothetical method may differ rather drastically in approach from what you wish to accomplish, and that difference is something that I think is perfectly OK.

These approaches aren't exclusive of each other. There can be more than one.

And it seems that both of our approaches rely on the rote preservation of existing (interlaced, analog, real-time!) video, for once that information is discarded in favor of something that seems good today, future improvements (whether in display technology or in deinterlacing/scaler technology, or both) for any particular video become largely impossible.

In order to reach either desired result, we really need the interlaced analog source (as close as possible), and not the dodgy transfers that are so common today.


Some screens will scan out top to bottom at 60Hz and mostly avoid that skew. If you took an OLED that does that, and added another mechanism to black out lines after 3ms, you'd have a pretty good match to the timing of the incoming signal.
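
Sketch of the timing that implies (line count and persistence are just illustrative numbers, not tied to any real panel or API):

    # A 60 Hz top-to-bottom scanout where each line lights up as the scan
    # reaches it and is blanked ~3 ms later. Purely illustrative values.

    FRAME_PERIOD = 1 / 60     # seconds per refresh
    LINES = 480               # visible lines, for illustration
    PERSISTENCE = 0.003       # how long a line stays lit, in seconds

    def line_schedule(line, frame):
        """Return (on_time, off_time) in seconds for one line of one frame."""
        on = frame * FRAME_PERIOD + (line / LINES) * FRAME_PERIOD
        return on, on + PERSISTENCE

    print(line_schedule(0, 0))    # first line: lit at 0 ms, dark at 3 ms
    print(line_schedule(479, 0))  # last line: lit ~16.6 ms in, dark ~3 ms later

Which is roughly the on/off pattern a CRT with fairly fast phosphor would give the same line.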


I don't want that kind of flicker, though. I just want the image to look good:

> I want a deinterlacer that can figure out how to make the (cough) best image possible so deinterlacing artifacts aren't noticeable.

> I want some kind of machine-learning algorithm that can handle the fact that the top of the picture was captured slightly before the bottom of the picture; then generate a 120p or a 240p video.

---

If we want to black out lines after 3ms, we might as well bring these back: https://www.youtube.com/watch?v=ms8uu0zeU88 ("Video projectors used to be ridiculously cool", by Technology Connections.)


Okay, so you're not interested in the flicker versus persistence tradeoffs.

In that case you just need a 60Hz-synced scanout, and you can get screens that do that right now. That will beat any machine learning stabilizer.


For anything from a camera, you want the right kind of smoothing and some amount of phosphor emulation, but the source usually has a long enough shutter time to make the latter less important.

With pixel art go ahead and add bloom, and faster phosphors become more important. But not much else. I don't think anything wants focus issues and power supply sag.


I just want to eventually be able to view NTSC video like it used to be viewed, but without preserving a CRT to do so with.

It was originally proofed on a CRT. It may have been a ridiculously good Sony BVM in excellent calibration, but it was still a CRT that had CRT issues.

(I don't care much if nobody else wants that experience. I'll build it myself when I feel that modern displays have become flexible enough to accomplish my goals.)


A high quality CRT wouldn't have a lot of those issues. And a lot of content wasn't proofed on anything; it's raw camera output. Either way, if something came from a camera, there was no intent to have half of a CRT's problems, and they very likely didn't do any compensation for those problems that would look bad on a better screen.

Wanting the whole package of CRT flaws isn't just playing a record; it's playing a record with a cheap needle. Go ahead if it feels more nostalgic or right to you, and I wish you luck in finding what you want. But I don't think it adds anything, and to some extent it detracts from the original.


I hope that you have a very nice day, and wish that you may reconsider the value of telling others the way in which they shall (and shall not) enjoy their media.


I didn't tell you that. I said "Go ahead if it feels more nostalgic or right to you, and I wish you luck in finding what you want."

So I hope you have a nice day too, but I also hope you don't keep a wrong idea of my comment in your head.

I wasn't being fake and patronizing; there are reasons to want that. It's just that those reasons usually aren't the original authorial intent.



