One of Linux's greatest strengths is its package management. In 2025, the world of Linux package management is highly varied, with several options available, each with its own advantages and trade-offs.

  • GalacticGrapefruit@lemmy.world · +6 · 3 hours ago

    Don’t mind me, being a casual user since 2014 taking down notes as I’m reading the debates in the comments.

    But I finally found out why Steam kept crashing. Snap broke it. I forced it to run as a flatpak, and now it works exactly as intended. Literally what made me finally switch from Ubuntu to Mint.

  • LordKitsuna@lemmy.world · +21/−5 · 6 hours ago

    pacman is the best and I’ll stubbornly refuse to entertain any other opinion. It’s in my experience the least likely to just randomly rip the system to shreds. I don’t know if it has more thorough prechecks or what, but I’ve had Debian and Fedora (apt and dnf) rip the system asunder trying to jump multiple major versions in an update of a system that hadn’t been online in a long time.

    I don’t care if jumping multiple releases at once “isn’t supported”; it shouldn’t be that frail, and Arch will happily update something many years behind as long as you update the keyring.

    Even in the event your system somehow does get hosed, you can fix almost everything by just chrooting in, grabbing the static pacman binary, and running “pacman -Qqn | pacman -S -”. I’ve recovered systems that had the entire /bin wiped (lol, oops moment with a script), and as far as I know apt and dnf have no equivalently easy “redo all”.
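    The recovery described above as a rough sketch, run from a live USB (the device name is a placeholder; adjust for your disk layout):

    ```shell
    # Mount the broken root filesystem and chroot into it
    mount /dev/sdXn /mnt        # placeholder: your root partition
    arch-chroot /mnt

    # pacman -Qqn lists every installed native (repo) package by name;
    # `pacman -S -` reads its target list from stdin, so the pipe
    # reinstalls every one of them, restoring any wiped files.
    pacman -Qqn | pacman -S -
    ```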

    • Max-P@lemmy.max-p.me · +4 · 4 hours ago

      Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.

      Pacman is as close as it gets to just untar’ing the package to your system. It does have some install scripts but they do the bare minimum needed.

      Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration-management thing that generates config files and such, all of which can go wrong, especially if you overwrote them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after upgrading MySQL. If any of it goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing; its only job is to put the files in the right place. If you want the service to start, you start it. If you want to run post-upgrade steps, you’ve got to do them yourself.

      Thus you can yank an Arch system 5 years into the future and if your configs are still valid or default, it just works. It’s technically doable with apt too but just so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.

    • IsoKiero@sopuli.xyz · +9/−2 · 5 hours ago

      I have absolutely zero experience on pacman, but I could argue the very same with dpkg/apt with the same arguments. The Debian kind, not the abomination Ubuntu ships with today.

      as far as i know apt and dnf have no equivalent easy redo all

      It’s similarly possible with Debian (dpkg --get-selections, some sed/cut/awk wizardry to trim the unnecessary stuff from the output, then xargs into apt install --reinstall and you should be good to go; maybe there’s an even simpler way to achieve that).
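      A minimal sketch of that pipeline (assuming dpkg and apt themselves still run; the awk filter stands in for the sed/cut wizardry):

      ```shell
      # dpkg --get-selections prints "package<TAB>state"; keep only
      # entries in state "install" and reinstall them from the repos.
      dpkg --get-selections \
        | awk '$2 == "install" { print $1 }' \
        | xargs apt-get install --reinstall -y
      ```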

      But that’s just me. I’ve been with Debian for quite a while. Potato was released in 2000, but I think I got my hands on it in 2001/2002, and I’ve been a happy user since. And even though I’ve worked with pretty much every major distribution around (RHEL, CentOS, SuSE, Ubuntu and even Slackware back in the day), I still prefer Debian, because that’s what I know, and over the years I’ve learned how to fix things if something goes sideways.

      • LordKitsuna@lemmy.world · +1 · 4 hours ago

        I think the missing key there is an independent, statically built binary for apt that does not depend on pretty much any part of the base system actually functioning. That’s what I couldn’t find. Is there one and I just suck at Google?

    • Aatube@kbin.melroy.org · +3 · 5 hours ago

      Agreed. The normal pacman CLI does have a comparatively much higher learning curve than e.g. APT, though. It’s not that hard to learn either, but when you’re scrolling through a long-ass manpage whose headers whizz by in a flash, you don’t immediately realize that -S (alias for --sync) is for installing from repos, -Ss is for searching the repos, -S by itself does not “synchronize” with the repos by pulling the newest package metadata (because that’s not what we’re “synchronize”-ing with; you have to add the “y” flag), -Su (remember to add “y”!) is for upgrading all packages rather than -U (alias for --upgrade), and -U is for installing a local package. Compare that to the APT/dpkg system’s apt install, apt search, apt update, apt upgrade, and dpkg -i.

      Admittedly APT does require one to get used to the fact that there are separate commands and that “update” and “upgrade” are different, but that’s way less to remember (especially since apt is meant to be the interface for everything a user should do) than pacman’s interesting definitions of database, query, sync, upgrade, and maybe files; the only APT definition unlikely to be guessed, IIRC, is update vs. upgrade. You’re far more likely to need a pacman cheatsheet than an apt cheatsheet.
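      For reference, a rough mapping of the flags described above onto their APT-side counterparts (the package names are just examples):

      ```shell
      pacman -Sy               # refresh package databases     ~  apt update
      pacman -Su               # upgrade installed packages    ~  apt upgrade
      pacman -Syu              # refresh + upgrade in one go   ~  apt update && apt upgrade
      pacman -S nginx          # install from the repos        ~  apt install nginx
      pacman -Ss nginx         # search the repos              ~  apt search nginx
      pacman -U ./pkg.tar.zst  # install a local package file  ~  dpkg -i ./pkg.deb
      ```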

      But in the end, let’s all love libalpm, the actual library behind that pacman interface.

      • nanook@friendica.eskimo.com · +2 · 1 hour ago

        @Aatube @LordKitsuna @Tea There are some things I’d like pacman to do automagically that it doesn’t, like updating the mirror list when it changes. Tried to install a package the other day and it kept throwing 404 errors because I had a stale list of mirrors. It didn’t tell me that, and it didn’t fix it automatically.

  • vermaterc@lemmy.ml · +8 · edited · 6 hours ago

    Pretty good article, went into some technical stuff, which surprised me, as in the Linux world I’m used to articles discussing changes in wallpapers between different distro releases :D

    • Onno (VK6FLAB)@lemmy.radio · +1 · 5 hours ago

      Wallpaper, yeah, there’s a lot of that going around. The announcement of the recent new apt release discussed colour as the primary new feature. No mention of any actual substantive changes, no reference to the impact on apt-get et al., not even a link to the detailed changelog.

  • kixik@lemmy.ml · +5/−6 · 5 hours ago

    I’ve tried flatpak packages in the past; they are terrible in many senses that the proponents (the vast majority, AFAIK) don’t mention, among them:

    • They create huge static binaries.
    • You get many libraries embedded in each package (static libs or bundled copies local to the package), often repeated across many of these binaries, even at the exact same version. This is entirely avoided when building against the system’s dynamic native libraries.
    • When installing the plethora of static dependencies for a package, let’s say liri, a bunch of the stuff it requires might already be installed natively on your system, but it still needs its own local copies as part of the package.
    • Care must be taken: there are statistics about abuse and vulnerability infections on PyPI, npm and so on, and it’s no different on these packagers’ repos/hubs.

    It’s good that they provide an alternative way to install packages not available in your distro’s repos, but for that, user repositories that build against native libraries are a much better option, like the AUR in the case of Arch; even binary packages coming from other distros or from upstream might be better than those universal static binary providers.

    There are political aspects involved in the proponents’ claims: in their view, the GNU+Linux ecosystem should become like the Windows one, where every company or org (to them it doesn’t matter which) can ship its own binary packages, and then there’s no reason to expect anyone to be able to build the software themselves.

    There’s actually a tendency among providers on those hubs to ignore problems from people who try to build their stuff on their own, claiming they only support the universal packages. That’s dangerous to me, since it erodes the ability to build and distribute the software, for reasons that are not about licenses but purely practical. It might actually be against the licenses they use, but nowadays who cares, right, it’s available on that packager’s repo…

    Lastly, one argument offered in favor of apps from those universal packagers is sandboxing. But there’s Firejail, which can be installed on most GNU+Linux distributions and comes with profiles for a plethora of apps, and if that sandboxing is not enough, it can easily be combined with AppArmor, or you can use SELinux if you prefer… No need for those universal packages to get a sandboxed experience.
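    A sketch of that alternative (assuming firejail is installed and ships a profile for the app in question; firefox is just an example):

    ```shell
    # Run an app under the Firejail profile shipped for it
    firejail firefox

    # Stack AppArmor confinement on top of the namespace sandbox
    firejail --apparmor firefox
    ```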

    One final note: so far GNU+Linux has been characterized by diversity, which is good; that diversity offers people options to choose from and a lot of different solutions for different purposes. Not so long ago the claim was that this was bad, that it meant fragmentation, and fragmentation is bad for adoption and maintenance. I see it the other way around: this diversity lets you choose what aligns best with your intent, like easy to use, or rolling release, or as vanilla as possible, or as up to date as possible, or as hardened as possible, etc. Systemd is another example of this intended universalization. Perhaps some distros would prefer to be a shell for systemd and get packages only from universal packagers; that’s bad news to me.

    Of course, having universal packagers presents an opportunity for corps and orgs to also offer their stuff on the GNU+Linux platform, but to my mind the best outcome would be for them to offer free/libre and open source software that would build on any system and could be provided by any packager that wants to offer it. Though perhaps that’s too idealistic. Heh.