I'm currently running DeepSeek on Linux with Ollama (installed via `curl -fsSL https://ollama.com/install.sh | sh`). I specifically have to run it on my personal file server, where all my data lives, because it's the only computer in the house with enough memory for the larger models, so I'm more concerned about security than I would be if it were running on a dedicated box that only does AI.

I'm really not knowledgeable about how AI actually works at the execution level, and I just wanted to ask whether Ollama is actually private and secure. I'm assuming it doesn't send my prompts anywhere, since everything I've read lists that as the biggest advantage of running locally.
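In case it matters for answering, here's roughly how I've been trying to convince myself that nothing leaves the machine. This is just a sketch, and it assumes the default setup where the Ollama service binds to 127.0.0.1 on port 11434 (the `OLLAMA_HOST` variable can change that), so adjust for your own config:

```sh
# Check what the Ollama server is listening on -- by default it binds to
# 127.0.0.1:11434, so the API shouldn't be reachable from other machines.
sudo ss -tlnp | grep 11434

# While a prompt is being generated, look for outbound connections from the
# ollama process; other than pulling models, I wouldn't expect to see any.
sudo lsof -i -P -n | grep ollama
```

My understanding is that the only network traffic should be the model download itself when you pull, but that's exactly the kind of thing I'd like someone more knowledgeable to confirm.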
But what I don't understand is how exactly the model is being executed on the system when you give it a command like `ollama run deepseek-r1:32b` and it downloads files from wherever it pulls them by default. Is it just downloading a regular executable and running that on the system, or is it more sandboxed than that? Is it possible for a malicious AI model to scan my files or do other things on the computer?
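If it helps, here's a rough sketch of what I was planning to check, plus what I was considering for extra isolation. I'm assuming the default model path used by the Linux install script (`/usr/share/ollama/.ollama`, with the service running as its own `ollama` user) and the official `ollama/ollama` Docker image, so paths and names may differ on other setups:

```sh
# What `ollama pull` downloads is model weights (GGUF blobs) plus small JSON
# manifests -- data that the ollama binary reads, not programs that get run.
sudo ls /usr/share/ollama/.ollama/models/blobs
sudo find /usr/share/ollama/.ollama/models/blobs -type f -exec file {} + | head
# ^ should report data (or GGUF), not an ELF executable

# For an extra layer of isolation on a box that also holds personal data, one
# option I've seen is the official container image, which only sees its own
# named volume rather than the rest of the filesystem:
docker run -d --name ollama \
  -p 127.0.0.1:11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
docker exec -it ollama ollama run deepseek-r1:32b
```

The appeal of the container route, as I understand it, is that the model runtime can only touch its own volume, not the rest of the file server, but I don't know if that's overkill.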
Most models now are .safetensors files, which are supposed to be safe, but I know that in the past there were issues where other model file types (the pickle-based ones) actually could have attack payloads in them.
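From what I've read, the reason .safetensors is considered inert is that the file is just an 8-byte header length, a JSON block describing the tensors, and then raw tensor bytes, so there's no code to execute on load. Here's a quick way I've seen to eyeball that (`model.safetensors` is a placeholder for any safetensors file, e.g. one from Hugging Face; the blobs Ollama itself pulls are GGUF, which as far as I know is likewise a pure data format):

```sh
# First 8 bytes: little-endian length of the JSON header that follows.
head -c 8 model.safetensors | xxd

# Start of the JSON header (tensor names/dtypes/shapes/offsets) -- plain data,
# unlike the pickle inside older .pt/.bin checkpoints that could run code on load.
tail -c +9 model.safetensors | head -c 500
```

That's the extent of my understanding, so corrections welcome.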