LLMs (might) make it easier to port code away from CUDA

I was reading Simon Willison's interesting analysis of the competition Nvidia is facing (as usual, his blog should be on your feed) and this bit caught my attention (emphasis mine):
Technologies like MLX, Triton and JAX are undermining the CUDA advantage by making it easier for ML developers to target multiple backends - plus LLMs themselves are getting capable enough to help port things to alternative architectures.

I found it curious that the very same technology that has been fueling Nvidia's success could also help reduce, or even eliminate, its moat.
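To give a feel for the "multiple backends" point, here's a minimal JAX sketch of my own (not from the linked post): the same jitted function runs on CPU, GPU, or TPU with no CUDA-specific code, since XLA picks whatever backend is available at runtime.

```python
import jax
import jax.numpy as jnp

# One function, no device-specific code: XLA compiles it for
# whichever backend (CPU, CUDA GPU, TPU, ...) is available.
@jax.jit
def dense_layer(x, w, b):
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x = jax.random.normal(k1, (32, 128))
w = jax.random.normal(k2, (128, 64))
b = jax.random.normal(k3, (64,))

print(jax.default_backend())        # e.g. "cpu", "gpu", or "tpu"
print(dense_layer(x, w, b).shape)   # (32, 64)
```

Porting code like this away from CUDA hardware is mostly a matter of installing a different backend, which is exactly the kind of mechanical translation work LLMs are starting to be good at.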
