I would love to ask some hard questions of this solution.
Let's say I have a very simple workflow.
Camera and CG in -> conversion to RGB/YCbCr -> compositing -> pass to broadcast encoder
Conversion and compositing can be done at scanline speeds using conventional hardware, so latency is at most a frame. With an asynchronous workflow this is not possible anymore. Even if we pretend the network infrastructure isn't an issue and is operating perfectly with low latency, I can't just use COTS hardware for the processing node, because without some sort of DMA even the NIC -> GPU transfer is several frames of latency; the GPU then needs to do the processing, and then you need GPU -> NIC again.
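To make the scanline point concrete, here's a rough sketch of what the RGB -> YCbCr step looks like per line (my own illustration, nothing to do with Matrox's implementation; the BT.709 limited-range coefficients, 8-bit packed pixels and the function name are just assumptions for the example). The maths is purely per pixel, which is why streaming hardware never has to hold more than a line, let alone a frame:

    #include <stdint.h>

    static inline uint8_t clamp8(int v) {
        return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    /* One scanline of 8-bit RGB to limited-range BT.709 YCbCr.
       Fixed-point coefficients are scaled by 256; the +32768 terms
       fold in the +128 chroma offset before the shift. */
    void rgb_to_ycbcr709_line(const uint8_t *rgb, uint8_t *out, int width) {
        for (int x = 0; x < width; x++) {
            int r = rgb[3 * x + 0], g = rgb[3 * x + 1], b = rgb[3 * x + 2];
            int y  = ( 47 * r + 157 * g +  16 * b) >> 8;
            int cb = (-26 * r -  87 * g + 112 * b + 32768) >> 8;
            int cr = (112 * r - 102 * g -  10 * b + 32768) >> 8;
            out[3 * x + 0] = clamp8(y + 16);
            out[3 * x + 1] = clamp8(cb);
            out[3 * x + 2] = clamp8(cr);
        }
    }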
I then need to reorder frames at the receiving device, the stream encoder. Because it's async, frame 2 might arrive before frame 1. So now I need a buffer there.
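That buffer ends up being something like this minimal reorder buffer (again my own sketch; the sequence-number scheme, the names and the 4-frame depth are assumptions for illustration):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define DEPTH 4                   /* frames we are willing to wait for */

    typedef struct {
        void    *data[DEPTH];
        bool     present[DEPTH];
        uint64_t next_seq;            /* next frame we may hand to the encoder */
    } reorder_buf;

    /* Store an arriving frame; drop it if it is late or too far ahead. */
    void rb_put(reorder_buf *rb, uint64_t seq, void *frame) {
        if (seq < rb->next_seq || seq >= rb->next_seq + DEPTH)
            return;                   /* outside the window: discard */
        rb->data[seq % DEPTH]    = frame;
        rb->present[seq % DEPTH] = true;
    }

    /* Hand out the next in-order frame, or NULL if we are still waiting. */
    void *rb_get(reorder_buf *rb) {
        size_t slot = rb->next_seq % DEPTH;
        if (!rb->present[slot])
            return NULL;              /* frame N not here yet: encoder stalls */
        rb->present[slot] = false;
        rb->next_seq++;
        return rb->data[slot];
    }

Whatever depth you give that window is latency you have added on top of everything else, before the encoder sees a single in-order frame.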
I do not see how this doesn't add significant latency to the path, even in the simplest setup. Add internet services like AWS and your latency shoots up to tens of frames (several hundred milliseconds at 50 fps) before you even hit the media encoder.
I’ve seen stuff going to and coming back from AWS with sub-second latency, even with H.264 encoding.
BT Sport have operated remote production channels from AWS for a few years now. [0]
I’m no fan of cloud, but the latency to a nearby DC isn’t high, and when your feeds are going from the field anyway it doesn’t make much difference.
I remember OBS (the Olympics host broadcaster, not the open source software) bemoaning the lack of bandwidth available in cities and having to build data centres in places like Beijing for temporary events, because the multi-gigabit links to AWS etc. simply aren’t available.
The hardware AWS would be using, NICs etc., does have the level of DMA needed to keep the latency down.
Here’s the catch though: the workflow I described can be done with maybe 2 frames of latency for like 400 USD, and you own the hardware. I wouldn’t be surprised if some of AWS’s products are 400 USD per hour.
You can set up a very competent broadcast system for not much money using actual COTS hardware and still have significantly less latency than a second, probably in the low single digits of frames.
I’m not saying the solution proposed by Matrox isn’t possible, I just think that for most use cases it is very expensive, both in actual cash and in latency.