
I'm calling eBPF "kernel shaders" to confuse graphics, OS, and GPGPU people all at the same time.


A decade and a half ago there was PacketShader, which used usermode networking + GPUs to do packet routing. It was a thrilling time. I thought this kind of effort to integrate off-the-shelf hardware and open source software-defined networking (SDN) was only going to build and amplify. We do have some great SDN, but it remains a fairly niche world that stays largely behind the scenes. https://shader.kaist.edu/packetshader/index.html

I wish someone would take up this effort again. It'd be awesome to see VPP or someone target offload to GPUs again. It feels like there's a ton of optimization we could do today based around PCIe peer-to-peer (P2P), where the network card could DMA directly to the GPU and back out without having to transit main memory or the CPU at all; lower latency & very efficient. It's a long leap & a long hope, but I very much dream that CXL eventually brings us closer to that "disaggregated rack" model, where a less host-based fabric starts disrupting architecture and creating more deeply connected systems.

That said, just dropping an FPGA right on the NIC is probably/definitely a smarter move. Seems like a bunch of hyperscalers do this. Unclear how much traction Marvell/Nvidia get from BlueField being on their boxes, but it's there. Actually using the FPGA is hard, of course. Xilinx/AMD have a track record of kicking out some open source projects that seem interesting but don't seem to have any follow-through. Nanotube, being an XDP offload engine, seemed brilliant, like a sure win. https://github.com/Xilinx/nanotube and https://github.com/Xilinx/open-nic .
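
For the curious, here's a rough sketch (my own illustration, not from the Nanotube repo; the object file name "xdp_prog.o" and interface "eth0" are made up) of what asking a NIC to run an XDP program in hardware-offload mode looks like with libbpf, which is roughly the path a SmartNIC/FPGA offload engine like Nanotube targets:

    /* Hedged sketch: load an XDP object and ask the NIC to run it in
       hardware-offload mode. "xdp_prog.o" and "eth0" are example names. */
    #include <stdio.h>
    #include <net/if.h>
    #include <linux/if_link.h>   /* XDP_FLAGS_HW_MODE */
    #include <bpf/libbpf.h>

    int main(void)
    {
        int ifindex = if_nametoindex("eth0");
        if (!ifindex)
            return 1;

        struct bpf_object *obj = bpf_object__open_file("xdp_prog.o", NULL);
        if (!obj)
            return 1;

        struct bpf_program *prog = bpf_object__next_program(obj, NULL);
        if (!prog)
            return 1;

        /* True offload verifies/translates the program against the target
           device, so it is loaded with the device's ifindex set. */
        bpf_program__set_ifindex(prog, ifindex);

        if (bpf_object__load(obj))
            return 1;

        /* XDP_FLAGS_HW_MODE = run on the NIC itself rather than in the
           kernel's software XDP path. Only a handful of devices support it. */
        if (bpf_xdp_attach(ifindex, bpf_program__fd(prog),
                           XDP_FLAGS_HW_MODE, NULL) < 0) {
            fprintf(stderr, "hardware XDP offload not supported here\n");
            return 1;
        }
        return 0;
    }

Without a NIC that supports offload, the attach fails and you'd fall back to the in-kernel driver path (XDP_FLAGS_DRV_MODE).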


What about Nvidia / Mellanox / BlueField?

It looks like they have some demo code doing something like that. https://docs.nvidia.com/doca/archive/doca-v2.2.1/gpu-packet-...

What kind of workloads do you think would benefit from GPU processing?


I've been thinking about it exactly this way for a long time. Actually, once we're able to push computation down into our disk drives, I wouldn't be surprised if these "Disk Shaders" end up being written in eBPF.


It's already a thing on mainframes, where disk shaders are called channel programs: https://en.m.wikipedia.org/wiki/Channel_I/O#Channel_program


Thank you for this beautiful rabbit hole to chase.


Totally, the sibling comment confirms it already exists! I hope the 'shader' name sticks too! I find the idea of a shader has a very appropriate shape for a tiny-program-embedded-in-a-specific-context, so it seems perfect from a hacker POV!

I have a VFX background (Houdini now, RSL shaders etc. earlier), plus some OpenCL dabbling and demoscene lurking, and based on that I think I prefer 'shader' to 'kernel', which is what OpenCL calls them... but 'kernel' conflicts with the name of, like, 'the OS kernel', at least somewhat.


I read that as eBNF and was very confused


This analogy works well when trying to describe how eBPF is used for network applications. The eBPF programs are "packet shaders": like pixel shaders, they are executed for every packet independently and can modify its attributes and/or payload according to a certain algorithm.
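
As a minimal sketch of what such a "packet shader" can look like (illustrative only; the program name and port number are arbitrary), here's an XDP program the kernel runs once per received packet, which can inspect headers and then pass, rewrite, or drop it:

    /* Illustrative "packet shader": an XDP program run once per packet.
       The program name and port 7777 are arbitrary examples. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/udp.h>
    #include <linux/in.h>        /* IPPROTO_UDP */
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int packet_shader(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Bounds checks are mandatory: the verifier rejects any access
           that isn't proven to stay inside the packet. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;
        if (ip->protocol != IPPROTO_UDP)
            return XDP_PASS;

        /* Assume no IP options, for brevity. */
        struct udphdr *udp = (void *)(ip + 1);
        if ((void *)(udp + 1) > data_end)
            return XDP_PASS;

        /* "Shade" the packet: drop UDP traffic to an example port. */
        if (udp->dest == bpf_htons(7777))
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

The per-packet, no-shared-state, run-to-completion model is what makes the pixel-shader comparison feel apt.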


The name “shader” screwed me up for so long. But once I better understood what they really are, I realized they're incredibly powerful. “Kernel shader” is amazing.


I love this name, I hope it catches on


Seems to have worked on me! Well played! :)


not "koroutines"? I like "kernel shaders" though.


I'm stealing this.



