I’ve read multiple times that CUDA dominates, mostly because NVIDIA dominates. ROCm is the AMD equivalent, but OpenCL also exists. From my understanding, these are technologies used to program graphics cards; I always thought shaders were used for that.

There is a huge gap in my knowledge and understanding about this, so I’d appreciate somebody laying this out for me. I could ask an LLM and be misguided, but I’d rather not 🤣

Anti Commercial-AI license

  • moonpiedumplings@programming.dev · 2 days ago
    Now, I don’t write code. So I can’t really tell you if this is the truth or not — but:

    I hear that OpenCL code is much more difficult and less accessible to write than CUDA code, so CUDA gets picked up and used by more developers.
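
    To give a feel for what that means, here’s a minimal CUDA sketch (a plain vector add, nothing specific to this thread): a kernel plus a launch, with the CUDA runtime quietly handling device discovery and kernel compilation. Doing the same thing in OpenCL means explicitly enumerating platforms and devices, creating a context and command queue, and compiling the kernel from a source string at runtime, which is a big part of the “less accessible” reputation. Treat this as an illustrative sketch, not a tuned example.

    ```cpp
    // Minimal CUDA example (CUDA C++): add two vectors.
    // Compile with: nvcc vec_add.cu -o vec_add
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host buffers
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers, plus host-to-device copies
        float *da, *db, *dc;
        cudaMalloc((void**)&da, bytes);
        cudaMalloc((void**)&db, bytes);
        cudaMalloc((void**)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover n elements
        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);  // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }
    ```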

    Someone mentioned CUDA “sometimes” having better performance, but I don’t think it’s only sometimes. Because of tensor cores (which are really good at the matrix multiplication behind neural nets), I think CUDA has vastly better performance whenever code takes advantage of that hardware.

    Tensor cores are not Nvidia-specific, but Nvidia is the “most ahead”: they have the most of them in their GPUs, and, probably most importantly, CUDA only supports Nvidia hardware, and therefore by extension their tensor cores.
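
    To make that concrete, here’s a rough sketch of CUDA’s warp-level WMMA API (nvcuda::wmma in <mma.h>), which is one way CUDA code drives tensor cores directly. The kernel name and launch are just illustrative, it only covers a single 16×16×16 half-precision tile, and it needs a Volta-or-newer GPU (compiled with something like nvcc -arch=sm_70).

    ```cpp
    // Sketch: one warp multiplies a single 16x16 half-precision tile pair on
    // tensor cores via CUDA's WMMA API. A real GEMM would tile larger matrices.
    #include <cuda_fp16.h>
    #include <mma.h>

    using namespace nvcuda;

    __global__ void wmmaTile(const half* a, const half* b, float* c) {
        // Fragments: per-warp register tiles for A, B, and the accumulator.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;

        wmma::fill_fragment(cFrag, 0.0f);           // C = 0
        wmma::load_matrix_sync(aFrag, a, 16);       // load A tile (leading dim 16)
        wmma::load_matrix_sync(bFrag, b, 16);       // load B tile
        wmma::mma_sync(cFrag, aFrag, bFrag, cFrag); // C += A * B on tensor cores
        wmma::store_matrix_sync(c, cFrag, 16, wmma::mem_row_major);
    }

    // Launch with exactly one warp, e.g.: wmmaTile<<<1, 32>>>(dA, dB, dC);
    ```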

    There are alternative projects (Leela Chess Zero, for example, mentions TensorFlow for Google’s Tensor Processing Units), but those aren’t anywhere near as popular, due to performance and software support.