I’ve read multiple times that CUDA dominates, mostly because NVIDIA dominates. ROCm is the AMD equivalent, but OpenCL also exists. From my understanding, these are technologies used to program graphics cards — though I always thought shaders were used for that.

There is a huge gap in my knowledge and understanding about this, so I’d appreciate somebody laying this out for me. I could ask an LLM and be misguided, but I’d rather not 🤣

Anti Commercial-AI license

  • mehdi_benadel@lemmy.balamb.fr
    21 hours ago

    Check implementations before saying shit like that. Nvidia historically has had bad open source driver support, which makes it hard for people to implement vGPU usage. They actively blocked us from using their cards remotely until COVID hit; then they released the code to do it. They are still limiting virtualization use cases on consumer-level cards. They had to release a toolkit before we could use their cards in Docker, whereas other vendors’ cards can be accessed just by sharing the /dev driver files with the container.
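To illustrate the difference described above, here is a minimal command-line sketch (image tags and binary names are illustrative; the NVIDIA path assumes the NVIDIA Container Toolkit is already installed and registered with Docker):

```shell
# NVIDIA: --gpus only works after installing the NVIDIA Container Toolkit,
# which injects the proprietary driver libraries into the container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# AMD (ROCm): no vendor runtime needed; the open-source kernel driver's
# device nodes are simply passed straight into the container.
docker run --rm --device /dev/kfd --device /dev/dri rocm/rocm-terminal rocminfo
```

The asymmetry is the point: the AMD case is plain device-file sharing, while the NVIDIA case depends on vendor tooling outside the container.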

    • skip0110@lemm.ee
      21 hours ago

      Can you share sample code I can try, or documentation I can follow, for using an AMD GPU in that way (shared, virtualized, using only open source drivers)?

      • mehdi_benadel@lemmy.balamb.fr
        19 hours ago

        Check Wolf (in my other comment); it’s the best example of GPU virtualization usage.

        Otherwise you can check other Docker images that use the GPU for compute, like Jellyfin for instance, or Nextcloud Recognize, Nextcloud Memories and its transcoding instance, …
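As a concrete sketch of what that device sharing looks like in practice, here is a hypothetical docker-compose.yml for Jellyfin with VA-API hardware transcoding on an AMD/Intel GPU via the open-source Mesa stack (paths and the group id are illustrative; check your host with `getent group render`):

```yaml
# Hypothetical compose sketch: Jellyfin with GPU transcoding.
# No vendor toolkit required; the /dev/dri render node is simply shared.
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config
      - ./media:/media
    devices:
      - /dev/dri:/dev/dri   # VA-API render node for hardware transcoding
    group_add:
      - "105"               # host's "render" group id (illustrative value)
```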