Half of these exist because I was bored once.

The Windows 10 and macOS ones are GPU-passthrough enabled and are what I occasionally use when I have to run a Windows or Mac application. Windows 7 is also GPU-enabled, but it's more a nostalgia thing than anything else.

I think my PopOS VM was originally installed for fun, but I've used it, along with my Arch Linux, Debian 12, Debian Testing (I run Testing on the host, but I wanted a fresh environment and was too lazy to spin up a Docker container or chroot), Ubuntu 23.10, and Fedora VMs, to test various software builds and bugs, since I don't like touching regular Ubuntu unless I must.

The Windows Server 2022 one I recently spun up to mess with Windows Docker containers (I have to port an app to Windows, and was looking at that for CI). That all became moot when I found out GitHub's CI doesn't support Windows Docker containers despite supporting Windows runners (the organization I'm doing it for uses GitHub, so I have to use it).

  • Raccoonn@lemmy.ml · 11 hours ago

    GPU passthrough has always been one of those exciting ideas I'd love to dive into one day. My current GPU, being a little older, has only 4 GB of VRAM. Oh, the joys of being a budget PC user. Thankfully it's more of a "would be nice" than an "actually need"…

  • billwashere@lemmy.world · 11 hours ago

    Well, I do, but I have a machine with 3/4 of a terabyte of memory in it.

    Work scraps are great sometimes.

    How are you running the macOS VMs? The machine I have is a cheese grater, so that makes it easier.

    • olympicyes@lemmy.world · 7 hours ago

      Are you running macOS or Linux as your host? My MacBook is an M1, and I found the performance running ARM Windows and ARM Fedora via UTM (QEMU) to be pretty good.

      • billwashere@lemmy.world · 7 hours ago

        On the cheese grater (2019 Mac Pro) it's a little convoluted. During COVID times it was my single-box lab, since it had so much memory (768 GB). So I was running nested ESXi hosts and then VMs under those. I also have an M1 MacBook Pro that runs ARM VMs under Parallels (mostly macOS, Windows, and a couple of Debian installs, I think).

        I've been looking at VMware alternatives at work, so I've been playing around with the hypervisors.

        I do this stuff for a living, but I also do it at home for fun and profit. OK, not so much profit. OK, no profit, but definitely for the fun. And because I love large electric bills.

        • olympicyes@lemmy.world · edited · 6 hours ago

          That's a beast of a Mac. Wake-on-LAN is your friend. I have the same problem with my Threadripper. I wrote a script that issues a WOL command to start or unsuspend my Ubuntu machine, so I can turn it off when not in use. It's probably a $70/month difference for me. Most of my virtualization is on Linux, but I've moved away from VMware because QEMU/KVM has worked so well for me. You should check out UTM on the Mac App Store and see if that solves any of your problems.

          ETA: https://mac.getutm.app/
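The WOL trick mentioned above is easy to script yourself: a magic packet is just six 0xFF bytes followed by the target's MAC address repeated 16 times, broadcast over UDP. A minimal Python sketch (the MAC in the example is hypothetical, and the target's NIC/BIOS must have WOL enabled):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC x 16."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is the usual choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, port))

# Hypothetical MAC for illustration:
# send_wol("aa:bb:cc:dd:ee:ff")
```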

      • billwashere@lemmy.world · 7 hours ago

        OK, I'll have to try this. The weird thing is my little test Proxmox server is a 2013 trash can Mac Pro. So this would be like a Hackintosh running on Mac hardware. Would that technically be a Hackintosh? I'm not really sure. According to the Apple license you can virtualize macOS if it's running on Mac hardware; I'm not sure if that requires macOS as the hypervisor. Regardless, this is not something I knew about. Very cool. Thanks for the info.

  • wulf@lemmy.world · 15 hours ago

    I run a different LXC on Proxmox for every service, so it's a bunch. There's probably a better way to do it, since most of those just run a Docker container inside them.

    • WasPentalive@lemmy.one · 15 hours ago

      Why mix Docker and VMs? Isn't Docker sort of like a VM, an application-level VM maybe? (I obviously do not understand Docker well.)

      • Kovukono@pawb.social · 13 hours ago

        Serious answer: I'm not sure why someone would run a VM just to run a single container inside it, aside from the VM providing volumes (directories) to the container. That said, VMs are perfectly capable of running containers, and can run multiple containers without issue. At work, our GitLab instance has runners that are VMs that just run containers.

        Fun answer, have you heard of Docker in Docker?

      • lazynooblet@lazysoci.al · 14 hours ago

        I like to run a hypervisor host as just that: a hypervisor host. The host being stable is important, and running only that also reduces the attack surface.

        An LXC per service is somewhat overkill. A single Docker host running in one LXC could likely run all the Docker containers.

  • veroxii@aussie.zone · 16 hours ago

    Not VMs, but I have way more Docker containers. I run most things as containers, which keeps the base OS nice and clean and free from dependency hell.

  • Auster@lemm.ee · 19 hours ago

    On the joke: define "sane". 😬

    On a serious note, I think there are valid reasons to have several VMs other than "I was bored". In my case, for example, I have a total of 7 VMs: 2 are miscellaneous systems for testing things out, 2 are for stuff I can't normally run on Linux, 2 are offline VMs for language dictionaries, and 1 is a BlissOS VM with Google apps for when I can't or don't want to use my phone.

    • Dagamant@lemmy.world · edited · 13 hours ago

      Nah, most of the Windows ones don't get updates anymore, and the Linux ones can get a script that updates them on boot. It takes longer to start up, but it handles the job itself.
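An update-on-boot script like the one described doesn't need to be fancy. A minimal Python sketch, assuming the common package managers for a few distro families (how it's hooked into boot, e.g. a cron `@reboot` entry or a systemd oneshot unit, is up to you):

```python
import subprocess

# Assumed non-interactive update commands per distro family;
# adjust for whatever guests you actually run.
UPDATE_COMMANDS = {
    "debian": [["apt-get", "update"], ["apt-get", "-y", "upgrade"]],
    "fedora": [["dnf", "-y", "upgrade"]],
    "arch":   [["pacman", "-Syu", "--noconfirm"]],
}

def update_commands(family: str) -> list:
    """Return the package-manager invocations for a distro family."""
    try:
        return UPDATE_COMMANDS[family]
    except KeyError:
        raise ValueError(f"unknown distro family: {family}")

def run_updates(family: str) -> None:
    """Run each update command in order, stopping on the first failure."""
    for cmd in update_commands(family):
        subprocess.run(cmd, check=True)
```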

  • InverseParallax@lemmy.world · 19 hours ago

    Yeah.

    My home server runs that many, but it's a monster dual-Xeon.

    The FreeBSD instances have a ton of jails, and the Linux VMs have a ton of LXC and Docker containers.

    It’s how you run many services without losing your mind.

  • Flyberius [comrade/them]@hexbear.net · edited · 20 hours ago

    I've had physical ESX servers running this many VMs simultaneously, and I can totally see why a hobbyist or dev would need this many VMs on standby. You are sane, yes.