Hi all!

I’m trying to use a local LLM to help me write in an Obsidian.md vault. The local LLM runs through a flatpak called GPT4All, which says it can expose the model through an OpenAI-compatible server. The Obsidian plugin can’t reach the LLM on the specified port, so I’m wondering if the flatpak’s settings need to be changed to allow this.

(This is all on Bazzite, so the Obsidian program is a flatpak too.)
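A quick sanity check I can run from the host (rough sketch: I’m assuming GPT4All’s default API port of 4891 and its OpenAI-compatible /v1/models route, so adjust if yours is configured differently):

```python
import json
import urllib.request

# GPT4All's local API server; 4891 is (I think) the default port --
# change BASE_URL if you set something else in GPT4All's settings.
BASE_URL = "http://localhost:4891/v1"

def list_models() -> dict:
    """Ask the OpenAI-compatible /models endpoint what's available."""
    with urllib.request.urlopen(f"{BASE_URL}/models", timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    try:
        print(json.dumps(list_models(), indent=2))
    except OSError as exc:
        # Connection refused here usually means the port isn't reachable
        # from outside the GPT4All sandbox (or the API server is disabled).
        print(f"Could not reach {BASE_URL}: {exc}")
```

If that fails even from the host, the problem would seem to be on the GPT4All side (server setting or sandbox permissions) rather than in the Obsidian plugin.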

Anyone have an idea where to start?

      • Toribor@corndog.social · 1 day ago

        I’ve not tried GPT4All, but Ollama combined with Open WebUI is really great for self-hosted LLMs and can run with Podman. I’m running Bazzite too and this is what I do.
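
        If you do go that route, recent Ollama versions also expose an OpenAI-compatible API (on port 11434 by default), so a plugin that expects an OpenAI server can usually point straight at it. Rough sketch of a test request; the model name is just a placeholder for whatever you’ve pulled locally:

        ```python
        import json
        import urllib.request

        # Ollama listens on 11434 by default and serves an OpenAI-compatible
        # API under /v1. "llama3" is a placeholder -- use a model you've pulled.
        URL = "http://localhost:11434/v1/chat/completions"
        payload = {
            "model": "llama3",
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        }

        req = urllib.request.Request(
            URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
        ```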