hohoho@lemmy.world to World News@lemmy.world • Ukraine Claims the War's Largest Surrender by Russian Troops (English)
1 month ago

I’ll just repost this here:
The ideal platform will be hardware agnostic
The general rule of thumb I’ve heard is that you need 1 GB of memory for every 1B parameters. In practice, however, I’ve found this isn’t always the case. For instance, on a GH200 system I’m able to run Llama3 70b in about 50 GB of memory. Llama3.1 405b, on the other hand, uses 90+ GB of GPU memory and spills over into roughly another 100 GB of system memory… but runs like a dog at 2 tokens per second. I expect inference costs will come down over time, but for now I’d recommend Lambda Labs if you don’t need a GPU workstation.
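The arithmetic behind the rule of thumb is just parameter count × bytes per parameter, which is why quantization breaks the "1 GB per 1B parameters" estimate. A rough sketch (the bytes-per-parameter table is my assumption about common weight formats, and it ignores KV cache and activation memory):

```python
# Rough weight-memory estimate for LLM inference.
# The "1 GB per 1B params" rule of thumb corresponds to ~8-bit weights;
# 4-bit quantization roughly halves that, which is how a 70B model
# can fit in ~50 GB (weights plus some overhead).

BYTES_PER_PARAM = {
    "fp16": 2.0,  # half precision, unquantized
    "int8": 1.0,  # the 1 GB / 1B-param rule of thumb
    "q4": 0.5,    # 4-bit quantized weights
}

def weight_memory_gb(params_billion: float, dtype: str) -> float:
    """Approximate GB for model weights alone (no KV cache, no activations)."""
    return params_billion * BYTES_PER_PARAM[dtype]

if __name__ == "__main__":
    for model, b in [("Llama3 70b", 70), ("Llama3.1 405b", 405)]:
        for dtype in ("fp16", "int8", "q4"):
            print(f"{model} @ {dtype}: ~{weight_memory_gb(b, dtype):.0f} GB")
```

By this estimate, 405B weights need ~405 GB even at 8-bit, which matches the observation that it spills far past a single GPU's memory.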
What’s the motherboard model? Which slot are you attempting to use? Is it a physical x16 slot with only an x8 connection? Do you have any other slots available, or M.2 drives installed that might be sharing lanes?
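One way to answer the x16-vs-x8 question on Linux is to compare a slot's advertised link width (LnkCap) against its negotiated width (LnkSta) in `lspci -vv` output. A small sketch, parsing a sample line so it runs without root or a GPU present (the sample line is illustrative, not from the poster's system):

```shell
# On a real system, run:
#   sudo lspci -vv | grep -E "VGA|3D controller|LnkCap:|LnkSta:"
# and compare "Width xN" in LnkCap (what the slot supports)
# vs LnkSta (what was actually negotiated).

# Sample LnkSta line as it appears in lspci -vv output:
line='LnkSta: Speed 8GT/s (ok), Width x8 (downgraded)'

# Extract the negotiated link width:
echo "$line" | grep -oE 'Width x[0-9]+'
```

A "(downgraded)" marker or a narrower LnkSta than LnkCap usually means lane sharing, often with M.2 slots or other PCIe devices.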