
I stand corrected. I should have known people would be doing that in Python.

How many of the world's total FLOPs are going through those?



A lot. OpenAI uses Triton for their critical kernels. Meta's torch.compile uses it too. I know Anthropic is not using Triton, but I think their homegrown compiler is also Python. xAI is using CUTLASS, which is C++, but I wouldn't be surprised if they start using its Python API going forward.
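For context, this is roughly what "critical kernels in Python" looks like: a minimal Triton vector-add sketch (hedged: needs the `triton` package and a CUDA GPU to actually launch, so it is guarded to degrade gracefully elsewhere).

```python
# Hedged sketch of a minimal Triton kernel, the kind of Python-defined GPU
# kernel the thread is talking about. The launch only runs if a CUDA device
# is present; the guard lets the snippet load where triton isn't installed.
try:
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        pid = tl.program_id(axis=0)                   # one program instance per block
        offsets = pid * BLOCK + tl.arange(0, BLOCK)   # this program's element indices
        mask = offsets < n_elements                   # guard the ragged tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    HAVE_TRITON = True
    if torch.cuda.is_available():
        x = torch.randn(1024, device="cuda")
        y = torch.randn(1024, device="cuda")
        out = torch.empty_like(x)
        add_kernel[(triton.cdiv(1024, 256),)](x, y, out, 1024, BLOCK=256)
        assert torch.allclose(out, x + y)
except ImportError:
    HAVE_TRITON = False
```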


Anthropic is a JAX shop


Surprised they got it working for Trainium


Triton is a backend for PyTorch. Lately it is the backend. So it's definitely a double-digit percentage, if not over 50%.


It's the backend for torch.compile; PyTorch eager mode will still use cuBLAS/cuDNN/custom CUDA kernels. Not sure what's the usage of torch.compile.
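The distinction is just a decorator away. A hedged sketch (assumes `torch` is installed; guarded so it falls back to a pure-Python reference where torch or a working compiler toolchain is missing):

```python
import math

# Pure-Python reference for a small elementwise chain (tanh-based GELU approximation).
def gelu_ref(x: float) -> float:
    return 0.5 * x * (1.0 + math.tanh(0.7978845608 * (x + 0.044715 * x**3)))

# Same math through torch.compile: eager mode dispatches one ATen/cuDNN-backed
# op per call, while the default TorchInductor backend fuses the chain and
# emits a Triton kernel on CUDA (generated C++ on CPU).
try:
    import torch

    @torch.compile  # lowers the op chain via TorchInductor
    def gelu_compiled(x):
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608 * (x + 0.044715 * x**3)))

    out = gelu_compiled(torch.tensor([1.0])).item()
except Exception:  # no torch, or no compiler toolchain for Inductor
    out = gelu_ref(1.0)
```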


> not sure what's the usage of torch.compile

Consider that, at minimum, both FB and OAI themselves definitely make heavy use of the Triton backend in PyTorch.


Doesn’t Triton define its own intermediate language that then compiles down to PTX?


It has a fairly standard MLIR pipeline
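Sketching the stages (hedged: stage names are those a compiled Triton kernel reports in its `.asm` dict on the CUDA path; exact set varies by target, and no GPU is needed just to state the order):

```python
# Triton's MLIR-based lowering, stage by stage.
PIPELINE = {
    "ttir": "Triton IR (MLIR triton dialect)",
    "ttgir": "TritonGPU IR (layouts, scheduling)",
    "llir": "LLVM IR",
    "ptx": "NVIDIA PTX, assembled to cubin by ptxas",
}
for key, stage in PIPELINE.items():
    print(key, "->", stage)
```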


Yes and?



