
  • I apologize if my phrasing is combative; I have experience with this topic and a knee-jerk reaction to AI being pitched as a literacy tool.

    Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

    The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

    (coincidentally from an article on the topic of LLM use for propaganda)

    You can’t “open source” a model in a meaningful, verifiable way. Datasets are massive, and even if you had the compute to audit them, poisoning can be far subtler than overtly trashing the dataset.
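
    To put that scale in perspective, here is a rough back-of-envelope sketch; the corpus size and review rate are illustrative assumptions, not figures for any specific model.

    ```python
    # Rough back-of-envelope: how long would a human audit of a
    # frontier-scale pretraining corpus take? Every number below is
    # an illustrative assumption, not a figure from any real model.

    corpus_tokens = 15e12      # assumed corpus size (~15 trillion tokens)
    tokens_per_hour = 50_000   # assumed careful review rate per auditor
    auditors = 1_000           # assumed full-time audit team

    hours = corpus_tokens / (tokens_per_hour * auditors)
    years = hours / (8 * 250)  # 8-hour days, 250 working days a year
    print(f"~{years:,.0f} years for the whole team")  # ~150 years
    ```

    And that is just reading the data, before you even ask what “poisoned” looks like.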

    For example, did you know you can control bias just by changing the ordering of the dataset? There’s an interesting article from the same author covering well-known poisoning vectors, and it’s already a few years old.
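
    Here is a minimal sketch of why ordering alone is invisible to a content audit; the dataset is a toy stand-in, and the underlying mechanism (recency bias from sequential gradient updates) is the point, not the specifics.

    ```python
    import random

    # Toy illustration: identical records, different order. With
    # sequential fine-tuning, examples seen last tend to dominate the
    # final behavior (recency / catastrophic-forgetting effect), so an
    # attacker can shift outputs without touching a single record.
    dataset = [
        {"text": "neutral example", "slant": 0},
        {"text": "slanted example", "slant": 1},
    ] * 500  # hypothetical mixed fine-tuning set

    benign = random.sample(dataset, len(dataset))           # interleaved
    poisoned = sorted(dataset, key=lambda ex: ex["slant"])  # slanted last

    # Both lists hold exactly the same records, so any audit that checks
    # contents (hashes, counts, per-record review) sees no difference:
    assert sorted(map(str, benign)) == sorted(map(str, poisoned))
    ```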

    These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.


  • Respectfully, you have no clue what you’re talking about if you don’t recognize that case as the exception and not the rule.

    Many of these early-generation LLMs are built from the same base models or trained on the same poorly curated datasets. They’re not yet built for pushing tailored propaganda.

    It’s trivial to bake bias into a model or put up guardrails. Look at DeepSeek’s lockdown on anything touching sensitive Chinese politics. You don’t even have to be that heavy-handed: just poison the training data with a bunch of fascist sources.
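
    As a toy illustration of how cheap the heavy-handed version is, here is a hypothetical output-side refusal filter of the kind vendors wrap around a model. This is a sketch of the general technique, not DeepSeek’s actual implementation; the blocklist and model API are made up.

    ```python
    # Hypothetical guardrail: a crude topic filter wrapped around a
    # model call. Sketch of the general technique only; the blocklist
    # and the model.generate() interface are illustrative assumptions.
    BLOCKED_TOPICS = {"tiananmen", "taiwan"}  # illustrative blocklist

    def guarded_generate(model, prompt: str) -> str:
        if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
            return "Sorry, I can't discuss that."  # canned refusal
        return model.generate(prompt)  # assumed model interface
    ```

    The subtler route is worse: skew the training mix and there’s no visible refusal to point at, just a model that leans the way its data does.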