Can you trust locally run LLMs? — wuphysics87@lemmy.ml to Privacy@lemmy.ml · 2 months ago · 20 comments
I’ve been playing around with ollama. Given that you download the model, can you trust it isn’t sending telemetry?
acockworkorange@mander.xyz · 2 months ago — Is the overhead because of containers, or is it because you’re running something that is meant to run on Linux through a compatibility layer like MinGW?