I'm giving Fedora Silverblue a go on a new laptop, but I'm unable to boot (and since I'm a Linux noob, the first thing I tried was installing it fresh again, but that didn't resolve it).
It's a single drive partitioned as ext4 and encrypted with LUKS (basically the default config from the Fedora installation).
any ideas for things to try?
Did you reformat the disk before installing? I've seen similar failures when the disk is still encrypted; the installer can't get hold of a previously encrypted disk. If there's no valuable data on the disk, load up a live distro, run GParted, and nuke the disk back to blank and pristine, since GParted doesn't care about encryption. Then try the installer again.
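A minimal sketch of that nuke-from-a-live-USB approach, assuming the target drive is /dev/sda (a placeholder; double-check with lsblk, and note this destroys everything on the disk):

lsblk                        # identify the right disk before touching anything
sudo wipefs --all /dev/sda   # erase partition-table and LUKS signatures from the whole drive

GParted's Device > Create Partition Table menu item accomplishes much the same reset from the GUI.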
@dustyData @evasync When I install, I generally prepare the partitions ahead of time with GParted; whether I create an entirely new partition table depends on whether it's the only OS on the disk or one of several. I'm not using any encrypted file systems, since I need the machines to be able to boot without my being present to type in a password or passphrase, so that is not an issue.
No, I just removed them with the live CD and repartitioned it (I'm assuming it's doing the same thing under the hood?)
You should let the installer do the partitioning. Silverblue and immutable systems are nitpicky about it, especially if LUKS is involved. The whole point is that you shouldn't meddle with the system at a low level at all.
@dustyData @evasync I've been working with Linux since 1992; I have a better idea of how I want my disks laid out than an installer script does.
rm /home
mkdir /home
make /var/home a symlink to it.
Alternatively, edit your /etc/fstab to mount on /var/home.

Don't you have that backwards? This is an atomic distro, and you'd want to
mkdir /var/home
then symlink /home
from that, no? Otherwise, you'll wind up with a home directory that is immutable.

@Telorand I am not familiar with that distro, but I am familiar with how mount works. As for what is immutable and what is not, you can set that with chattr +i file/directory or chattr -i file/directory.
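For reference, a quick sketch of toggling that immutable attribute (the path is a made-up example, and note this ext4 attribute is unrelated to what makes Silverblue an "immutable" distro):

sudo chattr +i /srv/example      # mark the path immutable; it can't be modified or deleted until the flag is cleared
lsattr -d /srv/example           # the 'i' flag shows up in the attribute listing
sudo chattr -i /srv/example      # clear it again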
Editing the /etc/fstab didn't work (I just changed the path, but not sure if the UUID plays any part), but I'll give the rm/mkdir part a go.
Did you update your initramfs after? The new fstab doesn’t apply until you refresh that
No, but I rebooted the system after the change. Do I still need to update it regardless of the reboot?
Edit: Probably try @[email protected]’s solution of
systemctl daemon-reload
first.

Yes. When booting, your system has an initial image that it boots off of before mounting file systems. You have to make sure the image reflects the updated fstab.
@data1701d @evasync You don't have to reboot to effect that; systemctl daemon-reload will reload the /etc/fstab file.
You might be right. I was thinking of it in terms of a traditional distro, as I use vanilla Debian where my advice would apply and yours probably wouldn’t.
From what I do know, though, I guess /etc would be part of the writable roots overlaid onto the immutable image, so it would make sense if the immutable image was sort of the initramfs and was read when root was mounted or something. Your command is probably the correct one for immutable systems.
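To make the commands involved concrete, a quick sketch (the initramfs commands are for traditional Debian and Fedora installs; whether Silverblue needs that step for an fstab change is exactly the open question above):

sudo systemctl daemon-reload    # have systemd re-read /etc/fstab without rebooting
sudo mount -a                   # attempt to mount everything listed in fstab right now
sudo update-initramfs -u        # Debian/Ubuntu: rebuild the initial ramdisk
sudo dracut --force             # Fedora (traditional): same idea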
There may well be hardware issues, but ext4 rarely corrupts the entire file system. You might end up with some data not flushed, so you'll have some inodes that don't point to anything and that fsck will remove on boot. With btrfs, though, I've had it corrupt and lose the entire file system. I've used ext2 through ext4 for as long as they've existed and never lost a file system; back in the ext2 days I had to hand-repair them a few times, but ext2 was sufficiently simple that that was not difficult. Within two weeks of turning up a btrfs file system, however, it shit itself in ways I could not recover anything from; the entire file system was lost. If I did not have backups, which of course I always do, I would have been completely fuxored. It is my opinion that btrfs and xfs, both of which have their advantages, are not yet sufficiently stable for production use.
That’s what she said.
The error says /home is a symlink. What if you ls -l /home? Since this is an atomic distro, /home might be a symlink to /var/home.

the command returns my user dir and a lost+found dir
that actually sounds like it’s already mounted
What’s in lost+found
yes it is a symlink to /var/home
So shouldn’t you mount your home partition on /var/home instead?
This feels like a winning strategy
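If you do try that, a sketch of what it might look like (the UUID below is a placeholder and ext4 is assumed; check your actual partition with blkid first):

sudo blkid                                         # find the UUID of the home partition
# example /etc/fstab line:
UUID=1111-2222-3333   /var/home   ext4   defaults   0   2
sudo systemctl daemon-reload                       # re-read fstab
sudo mount /var/home                               # mount it without rebooting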
Isn’t the default filesystem btrfs? Why did you go with ext4
@possiblylinux127 @evasync I can’t speak for them, but I’ve had btrfs blow up in ways I could not fix. I didn’t just lose a file but the entire file system. I have NEVER had this happen in many years with ext4.
ext4 is just terrible because of the inode issue; you'll be forced to reformat and reinstall again. Anyone using NixOS or Guix with lots of store write operations should not go near it.
NixOS and ext4 user here with no problems. Care to elaborate?
NixOS and ext4 user here with no problems.
Yet. Just like most of the articles out there say, this problem will start showing up around five months to a year in, depending on how much storage you have and how much nixos-rebuild switch / guix system reconfigure you use. I had around 512GB, so I ran out of inodes quickly, and despite having lots of storage space, the system was unusable for me.

Here's the exact issue that even others have talked about:
TL;DR: your NixOS or Guix system will break due to high inode usage, preventing you from getting a shell even after clearing older generations. In most cases you can't even clear older generations, simply because you've run out of inodes. More about the filesystem side has been discussed here.
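If you want to see how close a system is to that wall, inode usage is easy to check (store path assumed to be /nix/store; du --inodes is a GNU coreutils option):

df -i /                           # the IUsed/IFree columns show inode consumption per filesystem
sudo du --inodes -s /nix/store    # rough count of inodes consumed by the store itself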
Seems like this can be prevented from reaching that point by properly deleting old generations regularly though, right?
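For context, "properly deleting old generations" on NixOS usually means something along these lines (the 30-day cutoff is just an example):

sudo nix-collect-garbage --delete-older-than 30d   # drop generations older than 30 days, then garbage-collect
sudo nix-collect-garbage -d                        # or: keep only the current generation
sudo nixos-rebuild boot                            # regenerate the boot menu so old entries disappear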
No, those inodes still won't clear on their own. Sure, you'll be able to prolong things for a few weeks or months, but then you'll reach a point where you're left with just a single generation and can do nothing to clear space. The disk will mislead you by reporting free space, but it isn't usable, and you can't force space to be freed by manually running disk operations where the stores live - because a) that's a bad idea and b) you won't have the permission to. That's what happened to me, and I had to reinstall the entire system again.
Besides, deleting generations regularly would defeat the point of having a rollback system. Sure, for normal desktop usage you could live with preserving the last twenty to thirty generations, but this may be detrimental for servers that require the ability to roll back to every possible generation, or for low-end platforms constrained on space and therefore limited in generations.
20 or 30 generations 😹
I have space for 1 😭
Edit: you've got me worried now. Is the behavior you're referring to normal running-out-of-inodes behavior, or some sort of bug? Is this specific to ext4, or does it also affect btrfs Nix stores?
I've run across the information that an ext4 filesystem can be created with extra inodes, but you cannot add inodes to an existing filesystem.
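That matches how mke2fs works; the inode count is fixed when the filesystem is created. A sketch with a placeholder device (both flags are standard mkfs.ext4/mke2fs options):

sudo mkfs.ext4 -i 8192 /dev/sdX1            # one inode per 8 KiB of space instead of the usual 16 KiB default
sudo mkfs.ext4 -N 30000000 /dev/sdX1        # or request an explicit inode count
sudo tune2fs -l /dev/sdX1 | grep -i inode   # verify what the filesystem actually got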
I've only had this happen once, and it turned out to be because my RAM was shitting out errors that got written to disk, so it ended up not being btrfs's fault.
Was that in the last 5 years? If it was btrfs is now far more stable. It has never blown up for me and it has in fact saved my data a few times.
@possiblylinux127 It was this year. Glad it’s working for you. I’ll stick with what works for me and has provided adequate performance for years.