camilobotero@feddit.dk to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self-Hosted ChatGPT (English)
1 month ago
I can confirm that it does not run (at least not smoothly) on an Nvidia 4080 12GB. However, gemma2:27B runs pretty well. Do you think that if we added another graphics card, even a modest one, llama3.1:70B could run?
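As a rough back-of-envelope check (assuming the 4-bit quantization Ollama typically ships by default, i.e. roughly half a byte per parameter, plus a few GB of overhead for the KV cache; these figures are approximate, not exact requirements):

```python
# Rough VRAM estimate for running a 70B-parameter model at 4-bit quantization.
# Assumptions (approximate): ~0.5 bytes per parameter for Q4 weights,
# plus ~5 GB of overhead for KV cache and activations.
params = 70e9                   # llama3.1:70B
bytes_per_param = 0.5           # 4-bit quantized weights
weights_gb = params * bytes_per_param / 1e9   # ~35 GB for weights alone
overhead_gb = 5                 # KV cache + activations (rough guess)
total_gb = weights_gb + overhead_gb
print(f"~{total_gb:.0f} GB of VRAM needed")   # ~40 GB
```

So 12 GB plus a modest second card still falls well short of fitting the whole model in VRAM; Ollama can split layers across GPUs and spill the rest to system RAM, but the CPU-offloaded layers are what make it crawl.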
Stop it, Bender. Wait, no. Indeed, tell me more about this idea of yours.