
I think the need for hardware decoding stinks, because it makes otherwise capable hardware obsolete once it can't decode newer video formats.




Hardware acceleration has been a thing since...forever. Video in general is a balancing act between storage, bandwidth, and quality. Video playback on computers is a balancing act between storage, bandwidth, power, and cost.

Video is naturally large. You've got all the pixels in a frame, tens of frames every second, and however many bits per pixel. All those frames need to be decoded and displayed in order and within fixed time constraints. If you drop frames or deliver them slowly, no one is happy watching the video.
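To put rough numbers on it (my own back-of-the-envelope, not from the parent): 1080p at 30 fps and 24 bits per pixel is about 1.5 Gbit/s uncompressed, so the codec has to squeeze out roughly 300x just to hit a typical ~5 Mbit/s streaming bitrate.

    # Back-of-the-envelope: raw bitrate vs. a typical streaming target.
    # Resolution, frame rate, bit depth, and the 5 Mbit/s figure are
    # illustrative assumptions, not numbers from the thread.
    width, height = 1920, 1080        # 1080p
    fps = 30                          # frames per second
    bits_per_pixel = 24               # 8-bit RGB, no chroma subsampling

    raw_bps = width * height * bits_per_pixel * fps
    print(f"uncompressed: {raw_bps / 1e9:.2f} Gbit/s")          # ~1.49 Gbit/s

    stream_bps = 5e6                  # ~5 Mbit/s 1080p stream
    print(f"compression needed: ~{raw_bps / stream_bps:.0f}x")  # ~300x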

If you stick to video that can be efficiently decoded on a general purpose CPU with no acceleration, you're never going to keep up with the quality demands of actual users. It's also going to use a lot more power than an ASIC that is purpose-built to decode the video. And if you instead throw the beefiest CPU at it to handle higher quality video within some power envelope, your costs go up and the whole venture becomes untenable.
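You can see the software-vs-hardware gap on your own machine with something like the sketch below (assumes ffmpeg is installed; "input.mp4" and the hwaccel name are placeholders, and the right one depends on your platform, see `ffmpeg -hwaccels`). Wall-clock time is only a crude proxy; the real win is CPU load and power draw.

    # Rough sketch: decode the same file in software and with a hardware
    # decoder via ffmpeg, discarding the output. "input.mp4" and "vaapi"
    # are placeholders -- run `ffmpeg -hwaccels` to see what's available.
    import subprocess
    import time

    def time_decode(extra_args):
        start = time.monotonic()
        subprocess.run(
            ["ffmpeg", "-v", "error", *extra_args, "-i", "input.mp4",
             "-f", "null", "-"],
            check=True,
        )
        return time.monotonic() - start

    sw = time_decode([])                     # pure CPU decode
    hw = time_decode(["-hwaccel", "vaapi"])  # cuda/videotoolbox/d3d11va elsewhere
    print(f"software: {sw:.1f}s  hardware: {hw:.1f}s")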


I hear you, but I think the benefits fall mainly to streaming platforms rather than users.

Like I'm sure Netflix will lower their prices and Twitch will show fewer ads to pass the bandwidth savings on to us, right?


Would anyone pay Netflix any amount of money if they were using 1 Mbps MPEG-1 that's trivially decoded on CPUs?


