Was trying to install guix on top of fedora silverblue. It’s kinda working, but not exactly stable…
Some updates after sleeping on it and trying some morning debugging:
- Having either of the services enabled is what prevents login
- It’s a gnome-shell issue. Logging into a tty works fine, which shows that gnome-shell is crashing when I try to log in normally
Maybe it’s time to go back to debian…
Debian hasn’t done me dirty yet
I had this with a sunshine service being added as a user service in bazzite. I created a clean new user and it booted, confirming it was user based. Took a bunch of binary searching to work out what the issue was.
I’ve since done my own autostart setup for sunshine and it’s been fine ever since.
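(For anyone wanting to replicate it: one common shape for that kind of autostart is a systemd user unit, roughly like this. The ExecStart path and targets are a sketch, adjust for your install.)

```ini
# ~/.config/systemd/user/sunshine.service (sketch; path is a placeholder)
[Unit]
Description=Sunshine game streaming host
After=graphical-session.target
PartOf=graphical-session.target

[Service]
ExecStart=/usr/bin/sunshine
Restart=on-failure

[Install]
WantedBy=graphical-session.target
```

Enable it with systemctl --user enable --now sunshine.service.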
Crappy UX!
Yeah, thinking I might have to do something similar to start the services after login. Unfortunately they need to run as root, so it’ll be tricky to avoid having a second password prompt every time I login
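One option I’m looking at, assuming the services stay as systemd system units and polkit is in the picture: a polkit rule that lets my user start those specific units without re-authenticating. The unit name and user below are placeholders.

```js
// /etc/polkit-1/rules.d/49-guix.rules (sketch; unit and user are placeholders)
polkit.addRule(function (action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "guix-daemon.service" &&
        subject.user == "myuser") {
        return polkit.Result.YES;
    }
});
```

Then a login script can run systemctl start guix-daemon.service without a prompt, while the daemon itself still runs as root.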
Ouch, yeah that’s frustrating. I’m considering building my own image (preinstall my own apps), which would help with issues like this and allow consistent apps across machines.
Feels like a sledgehammer for a nail though
I just don’t get these for a bare metal system. Containers? Sounds great. Definitely on board. Bare metal? Debian, standard fedora, or gentoo is what makes sense to me
Workstation-as-code is pretty dope for enterprise…
The idea of an immutable, idempotent, declarative workstation, from cradle to grave, tickles me pink.
At that point, make it a thin client which boots from a network image and logs you into a terminal server.
Then you have the hardware and software resources you need for your role wherever you are.

This sounds like a great idea until you have multiple physical sites and DHCP doesn’t span network segments. Or, even if you’re willing to deal with that, employees who work from home. Anything that solves the second problem is almost certainly more complicated than just using VMs or containers on remote workstations, or a configuration manager on the workstation OS, without wasting your time on the thin client part.
So… containers that people log into…?

Falls under containers.
I recently brought over some ideas from VanillaOS over to my Arch install.
- Install as much as possible via flatpak
- Install a bunch of other stuff in distrobox (with podman backend)
That gives me like 50% (idk fake number) of the features from VanillaOS, but I get to keep control over my system.
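Concretely, the distrobox half looks roughly like this (container name and package are just examples):

```shell
# Arch container on the podman backend, sharing $HOME with the host
distrobox create --name dev --image docker.io/library/archlinux:latest
distrobox enter dev               # shell inside the container
sudo pacman -S --needed neovim    # installs inside the box, not on the host
distrobox-export --app neovim     # optional: expose it in the host app menu
```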
Not that I ever had any problems with native pacman installs though… so… not sure how much benefit I’m really getting from doing this. I guess my pacman -Syu command runs faster now. That’s something…

Not judging, but just FYI: that’s kind of the worst of both worlds tbh. The point of installing independently of the base system is that the system is immutable and easy to roll back to a previous state. If you use a mutable system and also install packages by other means, you’re working around a limitation that isn’t even there and wasting more space to get almost none of the benefits (aside from easier permission control for Flatpaks).
Why are you even running arch at that point, for the DE updates?
Byebye to your storage 😆
Honestly, my current stance on immutable distros is: why don’t you have a mutable distro and just try to follow the best practices without being forced to?
Install flatpaks, use Distrobox when something is only available as a standard package, but doesn’t actually depend on non-isolated system interaction, etc.
This way, nothing breaks the way it does with immutable distros, but you still have a reasonable level of confidence in your system.
To me, the main advantage of using an atomic distro is that I use my own custom image. It comes with all the packages I need from rpm, and all of my config included. Switching between different machines is a breeze now.
BlueBuild makes creating custom images super easy.
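For a taste: a BlueBuild recipe is just YAML, something along these lines (base image and packages are examples, check the BlueBuild docs for the current schema):

```yaml
# recipe.yml (sketch)
name: my-desktop
description: Personal image with my packages baked in
base-image: ghcr.io/ublue-os/silverblue-main
image-version: latest
modules:
  - type: rpm-ostree
    install:
      - distrobox
      - fish
```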
Fair point!
But again, this is mostly useful in a production environment, not as a home user imo.
For home? Yes. For professional use where you have to deploy and support tens to hundreds of desktops? Immutable + a proper build tool chain is the best thing since sliced bread. And when you already have that, a copy of that for home makes it good for home use too.
Sure, I had to make that distinction. I only mean personal home use here.
Yeah, I’m leaning toward this option tbh.
If we got to the point where popular machines had custom images with all the necessary extra drivers etc, it might be a value add. But for now I’m not seeing a huge benefit
I initially tried guix -> switched to nix with home-manager because it’s got a lot better repos -> installed all user packages through nix on Debian -> nixos
Before nixos I used flatpaks for some packages because nixgl seems abandoned.
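The nix-on-Debian step was basically just a home-manager home.nix, something like this (username and packages here are placeholders):

```nix
# home.nix (sketch; username and paths are placeholders)
{ pkgs, ... }: {
  home.username = "myuser";
  home.homeDirectory = "/home/myuser";
  home.stateVersion = "24.05";

  # User packages come from nixpkgs instead of apt
  home.packages = with pkgs; [ ripgrep fd htop ];

  programs.git.enable = true;
}
```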
follow [best practice]
Install flatpaks
Dude. Find a security guy who knows about validation and supply chain risks. Tell that person those two phrases. Learning should commence if they’re any good.
Wow.
We’re talking risk for the system here.
Where I get into trouble is when I do a bunch of nixos-rebuild --switch runs between restarts and some state ends up hanging around, so next time I reboot that ephemeral state is gone and whoops, no internet.

If you’re not already, just erase your darlings.
Then you can preview what files are lost on reboot (see blogpost).
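The core of the pattern, assuming ZFS and the dataset layout from the blogpost, is rolling the root filesystem back to a blank snapshot on every boot and opting specific paths into persistence:

```nix
# configuration.nix fragment (sketch; pool/dataset names follow the blogpost)
{ lib, ... }: {
  # Reset / to an empty snapshot early in boot
  boot.initrd.postDeviceCommands = lib.mkAfter ''
    zfs rollback -r rpool/local/root@blank
  '';

  # Keep what matters on a dataset that survives the rollback
  environment.etc."machine-id".source = "/persist/etc/machine-id";
}
```

Previewing what the next reboot would erase is then zfs diff rpool/local/root@blank.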
That was a really good read, thanks for the link!
It’s the ghost, the ghost in the machine. Major Kusanagi would be proud.
Not specifically a nixos issue, also at least nix gives you rollbacks
Immutable seems like a good idea and it is for security or for a console-like PC but for any sort of intermediate or advanced user, it’s not such a good idea.
From my experience it’s quite the opposite, cause when something breaks in guix/nix/bazzite you basically need to know how the entire subsystem works to troubleshoot it.
You can’t just copy paste some nonsense from superuser to fix it.
You’re comparing apples and oranges if you focus on immutability alone.
EndlessOS may certainly cater to the idea of being user-friendly but unbreakable, but I would consider NixOS to be an advanced distro.
Why is that? What do you feel is the downside?
I suppose it depends on the OS. But the Universal Blue OSes (Bazzite, Bluefin, and Aurora) are the ultimate tinkerer’s OSes even though they are immutable.
Wait, what? I’m legit not familiar with immutable distros, is it like you’re only allowed to modify certain directories?
In simplified terms:
You are allowed to modify stuff but it is not actually changing the install as is.
This is achieved by different techniques like file system overlays, containerisation, btrfs snapshots and so on.
The idea is to replicate the classic behavior you know from embedded devices, which keep their core functionality in ROM with even firmware updates only overlaid, or from modern smartphones: you can modify your system, but in the end there’s always the possibility to “reset to factory settings”, i.e. the last known working configuration.
So, baby-proofing Linux?
We prefer “security hardening” but yes that… Also works lol
I’d describe it as making computer systems reliable.
This kinda response is so funny to me. I’ve seen similar attacks on Rust, and all I can assume is either you’re in the 0.01% of users who are ideal use cases and have never had an issue caused by something that could have been prevented by immutability, or you just have that crab bucket, “well I put up with the frustration, so everyone else should have to too!” mentality
I’m not even here to claim that immutability is ideal for everyone, but “haha you like to not waste your time unfucking your OS” is not the epic burn you think it is
After seeing folks on lemmy who wiped their /boot and did other funny stuff I must ask you: do you think your argument is all that righteous?
The idea of immutable distributions does not trigger me: there are valid use cases for that too. But the whole parroting of brainrot “I’ve got my system fucked, so immutable distros go brrrrr” sounds more and more like a band of childlike people looking for anyone to blame but themselves
I don’t care if something could or could not have been prevented with immutability with my system, but I always care of the following: this next thing I am going to do with the system, am I prepared to deal with it if something goes sideways or not. Now that looks like a burn to you or what?
Kinda. Generally the user files (including custom installed applications) are on a rw partition. Whereas the system files (OS files, root folder, etc) are on a ro partition. When updates are applied to the core system they come as complete images. No compiling from source on the fly.
The advantage of this is that it should be near impossible to break your system. If you need to roll back to a previous version, the system just downloads/mounts the previous image. There is less flexibility in terms of changing system files, but the idea with immutable distros is that you shouldn’t be modifying system files anyway, and there are different ways to accomplish things.
A really good example is Android. Android (non-rooted) is kinda-sorta an immutable distro. Except it uses an A/B partition method, where the active system downloads and installs to the other partition, triggers a flag, then a reboot picks up the flag and boots from the newly installed partition. If anything goes wrong, another flag is triggered and it boots from the “good” partition.
It’s not quite the same, but at a high-level it kinda is.
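On Fedora Atomic desktops the same keep-the-last-good-image idea is exposed directly (sketch; run as root):

```shell
rpm-ostree upgrade    # stage the new image; the running one is untouched
rpm-ostree status     # list deployments, including the previous one
rpm-ostree rollback   # make the previous deployment the boot default again
```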
Edit: article I found about it
https://linuxblog.io/immutable-linux-distros-are-they-right-for-you-take-the-test/
Yes, kind of.
Someone might correct me if I’m wrong but it’s that, plus extra tooling to redirect the stuff that needs to be writable, plus more extra tooling to allow you to temporarily unlock the read-only parts in order to do system updates, plus a system updater that puts the whole system more-or-less under version control.
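The “temporarily unlock the read-only parts” tooling is a real command on ostree-based systems, for example:

```shell
sudo ostree admin unlock    # writable overlay on /usr until the next reboot
sudo rpm-ostree usroverlay  # Fedora Atomic wrapper for the same idea
```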
It’s similar to using Deep Freeze on Windows where outside of specific writeable directories anything that shouldn’t be changed isn’t allowed to change.