Hacker News

The interesting conclusion of all this is that if everything looks like a file, then it doesn't matter what OS it runs on. A /dev/screen can be on your local Plan 9, on a remote Windows box, or on your Linux VPS; as long as it respects the protocol, it doesn't matter. Plan 9 is the host of this experiment, but its findings can be (and have been) imported elsewhere.


Is that actually useful in practice?

When you're talking about things like displays, performance is extremely important. We're talking about 178 MB/s to update a full HD screen at 30 fps, which requires networking pretty much no normal user has.


In my work, there's not much which requires updating a full HD screen at 30 fps... video calls, I suppose. Everything else updates small portions of the screen at lower rates.

There's a program called drawterm which implements Plan 9 graphics devices on Linux. You run drawterm locally, connecting to a Plan 9 system, and your applications on the Plan 9 system draw to your drawterm window over the network. I regularly run it at 4k and it performs quite well.


I'm guessing these applications do not have any kind of animations or smooth scrolling? That would be a simple test: make your web browser or your image viewer fullscreen at 4K and see if there is lag in the scrolling/panning/zooming.


/dev/screen was an example; in practice, as said in the sibling comments, you'd use drawterm, which fulfills roughly the same use case as ssh or RDP, so yes, the use is there. And you may not need a full HD screen at 30 fps to work.

But it doesn't stop there. Want to play local music remotely? /dev/audio is there for that. Want to use a machine as a jump server? Just mount its /net directory into yours and any network operation will go through it.

The ideas can be used today. I have a music folder with only lossless songs for personal reasons, but it's obviously not ideal for playing from my phone because of how large they are. So I had a server that transcoded them to Vorbis on the fly and served them with FUSE, with an sshfs on top of that to serve the transcoded files to my phone. This composition of a common interface may use no line of code from Plan 9, but it definitely reuses its philosophy.
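The composition described above (a transcoder hidden behind an ordinary file interface, with sshfs stacked on top) can be sketched in Python. This is a hedged stand-in, not the commenter's actual setup: the "transcoder" here is a placeholder function, and a real server would expose objects like this through a FUSE binding rather than use them directly.

```python
import io

class TranscodedFile(io.RawIOBase):
    """Read-only file-like view that transcodes the source lazily.

    `transcode` is any bytes -> bytes function; here it is just a
    stand-in for a real FLAC -> Vorbis encoder.
    """
    def __init__(self, source, transcode):
        self._source = source
        self._transcode = transcode
        self._buffer = None  # filled on first read

    def readable(self):
        return True

    def read(self, size=-1):
        if self._buffer is None:  # transcode once, on demand
            self._buffer = io.BytesIO(self._transcode(self._source.read()))
        return self._buffer.read(size)

# Hypothetical "transcoder": upper-cases the payload so the example stays
# self-contained; a real server would shell out to ffmpeg or oggenc.
flac = io.BytesIO(b"lossless audio bytes")
ogg = TranscodedFile(flac, lambda data: data.upper())
print(ogg.read())  # consumers just see an ordinary readable file
```

The point is the same as in the comment: once everything speaks "readable file", layers like FUSE and sshfs compose for free, with no awareness of what sits underneath.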


I think this looks at the benefit backwards. 9p allows resources to be where they make sense and abstract the location from the usage. Running a display over the network might not make sense but with 9p it also isn't necessary. 9p itself allows me to run my GUI locally while the data and processing live elsewhere.


You are seriously overestimating the needed throughput in practice. 60 fps 1080p can be streamed with good quality over a 16 Mbps channel (2 MB/s). The real problem is the lack of good open source software that eliminates the annoying latency of desktop protocols (Xorg...). There are things such as SPICE or X2Go or RDP which are "OK", but I suspect a much better experience is possible. The computers are extremely fast already, but our software is so bad we can't see it.


178 MB/s is a calculation, not an estimate:

1,920 × 1,080 pixels @ 24 bits/pixel = 6,220,800 bytes/frame

6,220,800 bytes/frame × 30 frames/s = 186,624,000 bytes/s ≈ 177.98 MiB/s
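The arithmetic is easy to reproduce:

```python
# Raw (uncompressed) bandwidth for 1080p at 30 fps, 24-bit colour.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30

bytes_per_frame = width * height * bits_per_pixel // 8
bytes_per_second = bytes_per_frame * fps

print(bytes_per_frame)                     # 6220800 bytes/frame
print(bytes_per_second)                    # 186624000 bytes/s
print(round(bytes_per_second / 2**20, 2))  # 177.98 (MiB/s)
```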

You are seriously underestimating the simplicity of plugging in a video encoder.


Images can be compressed when using devdraw. The compression formats are relatively primitive, but they're good enough in practice. Slotting in better ones seems like it should be straightforward, though video codecs don't fit cleanly.


So how simple do you estimate using video compression there would be? Is it even possible?


But once you introduce a piece of software into the middle to make this usable, what's the actual difference between this and just using VNC?

At that point it doesn't really matter if the screen is a file or not -- you need a compressor that can easily provide the output on a network socket, and a client that can perform the decoding.


You're right that it doesn't matter if it's a file or not per se.

What matters - and what the file interface gets you, though you can do the same thing in many other ways - is introducing the concept of a generic, pluggable, chainable API.


178 MB/s is under 1.5 Gb/s. It's only because we've been stuck with slow gigabit Ethernet for 20 years that we think this is a hard problem.

10G Ethernet can do it no problem, and fractional speeds like the 2.5 and 5 gigabit standards should have little issue as well.


I concur that it sucks that Ethernet is in a rut for some reason.

But even on 10G that's no picnic. Sure, it works for a single user, but add a few more people and it's not hard to run into trouble. Such a system can't, for instance, just drop frames when the network is overloaded, which to me makes this more of a curiosity than something anybody would actually want to use in practice.


10G switches are old hat and can do full crossbar switching at 10G; unless you're using very old tech off eBay you shouldn't have issues.

Trunk lines of 100G and higher are pretty common in core networks now, if you're big enough to need more than a single switch. The main limit was that we had trouble doing 10G over Cat 5e copper at long distances. 2.5/5G solve that problem, and 10G is possible with Cat 6. Fibre has no issue with very high rates for the network backhauls aggregating all that traffic. Most datacenters are moving to 25G for server connections.

With the exception of the copper standards, all of this has been rolled out in the datacenter for years and is pretty mature.


I've speculated that it's the patents on 10G over copper holding us back. IIRC we're just about at the point where the early over-fiber modes are off patent in the US.

However, the 10G-over-copper encoding uses a complex forward error correction scheme that is somewhat energy intensive and adds some latency. A smaller silicon process node, plus adoption beyond the seriously expensive gear aimed at prosumers and medium businesses, would drive prices down through commoditization.


There have been some posts about upscaling algorithms here lately; perhaps they could be used to reduce the required bandwidth?


I can see it being very useful for events, call centers, or any kind of operations center where you want a lot of screens.



