I’m giving Fedora Silverblue a go on a new laptop, but I’m unable to boot (and since I’m a Linux noob, the first thing I tried was reinstalling it fresh, but that didn’t resolve it).

It’s a single drive, partitioned as ext4 and encrypted with LUKS (basically the default config from the Fedora installer).

Any ideas for things to try?

  • dustyData@lemmy.world · 4 days ago

    Did you reformat the disk before installing? I’ve seen similar failures when the disk is still encrypted; the installer can’t get hold of a previously encrypted disk. If there’s no valuable data on the disk, boot a live distro, run GParted, and nuke the disk blank and pristine again; GParted doesn’t care about encryption. Then try the installer again.
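
    For the wipe itself, something like this from a live session should do it. This is only a sketch: /dev/sdX is a placeholder for the actual disk, and both commands are destructive.

    ```shell
    # DANGER: destroys all signatures and data on the target disk.
    # First identify the right disk; look for a crypto_LUKS signature.
    lsblk -f

    # Remove all filesystem, LUKS, and partition-table signatures.
    sudo wipefs --all /dev/sdX

    # Optionally zero the first few MiB as well, for good measure.
    sudo dd if=/dev/zero of=/dev/sdX bs=1M count=16 status=progress
    ```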

    • evasync@lemmy.world (OP) · 4 days ago

      No, I just removed the partitions with the live CD and repartitioned the disk (I’m assuming that does the same thing under the hood?).

      • dustyData@lemmy.world · 3 days ago

        You should let the installer do the partitioning. Silverblue and other immutable systems are nitpicky about it, especially if LUKS is involved. The whole point is that you shouldn’t meddle with the system at a low level at all.

    • nanook@friendica.eskimo.com · 4 days ago

      @dustyData @evasync When I install, I generally prepare the partitions ahead of time with GParted; whether I create an entirely new partition table depends on whether it is the only OS on the disk or one of several. I’m not using any encrypted file systems, since I need the machines to be able to boot without my being present to type in a password or passphrase, so that is not an issue for me.

  • nanook@friendica.eskimo.com · 4 days ago

    There may well be hardware issues, but ext4 rarely corrupts the entire file system. You might end up with some data not flushed, so you’ll have some inodes that don’t point to anything, which fsck will remove on boot. With btrfs, though, I’ve had it corrupt and lose the entire file system. I’ve used ext2 through ext4 for as long as they’ve existed and never lost a file system; back in the ext2 days I had to hand-repair them a few times, but ext2 was simple enough that that wasn’t difficult. Within two weeks of turning up a btrfs file system, however, it shit itself in ways I could not recover anything from, and the entire file system was lost. If I did not have backups, which of course I always do, I would have been completely fuxored. It is my opinion that btrfs and xfs, both of which have advantages, are also both not sufficiently stable for production use.
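
    For reference, the fsck pass described above can also be run by hand from a live environment. A sketch, with /dev/sdXn standing in for the actual (unmounted) partition:

    ```shell
    # Automatically fix what is safe to fix ("preen" mode).
    sudo fsck.ext4 -p /dev/sdXn

    # Force a full check and answer yes to repair prompts.
    sudo fsck.ext4 -f -y /dev/sdXn
    ```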

      • LalSalaamComrade@lemmy.ml · 4 days ago

        ext4 is just terrible for this inode issue, because you’ll be forced to reformat and reinstall again. Anyone using NixOS or Guix with lots of store write operations should not go near it.

          • LalSalaamComrade@lemmy.ml · 4 days ago

            NixOS and ext4 user here with no problems.

            Yet. Just like in most of the articles out there, this problem will start showing up somewhere around five months to a year in, depending on how much storage you have and how often you run nixos-rebuild switch/guix system reconfigure. I had around 512 GB, so I ran out of inodes quickly, and despite having lots of storage space left, the system was unusable for me.

            Here’s the exact issue that others have talked about as well:

            TL;DR: your NixOS or Guix system will break due to high inode usage, preventing you from accessing a shell even after clearing older generations. In most cases you can’t even clear older generations, simply because you’ve run out of inodes. More about the filesystem has been discussed here.
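
            A quick way to check whether a system is heading there: df reports block usage and inode usage separately.

            ```shell
            # Space usage vs inode usage for the root filesystem. If IUse%
            # reaches 100% while Use% is still low, the disk is out of
            # inodes, not out of space.
            df -h /
            df -i /
            ```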

            • dhhyfddehhfyy4673@fedia.io · 3 days ago

              Seems like this can be prevented from reaching that point by properly deleting old generations regularly, though, right?

              • LalSalaamComrade@lemmy.ml · 3 days ago

                No, those inodes still won’t clear on their own. Sure, you’ll be able to prolong things for a few weeks or months, but then you’ll reach a point where you end up with just a single generation and can do nothing to clear space. The device will mislead you with free space that isn’t actually accessible, and you can’t force-free space by running disk operations manually where the store lives, because a) that’s a bad idea and b) you won’t have the permission to. That’s what happened to me, and I had to reinstall the entire system.

                Besides, deleting generations regularly would defeat the point of having a rollback system. Sure, for normal desktop usage you could live with preserving just the last twenty or thirty generations, but that may be detrimental for servers that require the ability to roll back to every generation possible, or for low-end platforms constrained on space and therefore limited in generations.
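
                For reference, the periodic cleanup being discussed usually looks something like this (a sketch; the 30-day retention window is just an example):

                ```shell
                # NixOS: drop system generations older than 30 days, then
                # garbage-collect the store paths nothing references anymore.
                sudo nix-collect-garbage --delete-older-than 30d

                # Guix: the rough equivalent for system generations.
                sudo guix system delete-generations 30d
                guix gc
                ```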

                • mvirts@lemmy.world · 3 days ago

                  20 or 30 generations 😹

                  I have space for 1 😭

                  Edit: you’ve got me worried now. Is the behavior you’re referring to the normal out-of-inodes behavior, or some sort of bug? Is it specific to ext4, or does it also affect btrfs Nix stores?

                  I’ve run across the information that ext4 can be created with extra inodes, but that you can’t add inodes to an existing filesystem.
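
                  Right: the inode count is fixed when the filesystem is created. If you do reformat, mke2fs can allocate more inodes up front. A sketch, with /dev/sdXn as a placeholder for the target partition (destructive):

                  ```shell
                  # Ask for an explicit total number of inodes...
                  sudo mkfs.ext4 -N 4000000 /dev/sdXn

                  # ...or lower the bytes-per-inode ratio (the default is
                  # usually 16384); smaller values yield more inodes.
                  sudo mkfs.ext4 -i 8192 /dev/sdXn

                  # Verify the result afterwards.
                  sudo tune2fs -l /dev/sdXn | grep -i 'inode count'
                  ```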

      • Possibly linux@lemmy.zip · 4 days ago

        Was that in the last five years? If it wasn’t, btrfs is now far more stable. It has never blown up for me, and it has in fact saved my data a few times.

      • secret300@lemmy.sdf.org · 4 days ago

        I’ve only had this happen once, and it turned out my RAM was shitting out errors that were then saved to disk, so it ended up not being btrfs’s fault.

    • Telorand@reddthat.com · 4 days ago

      Don’t you have that backwards? This is an atomic distro, so you’d want to mkdir /var/home and then symlink /home to it, no? Otherwise you’ll wind up with a home directory that is immutable.

      • nanook@friendica.eskimo.com · 4 days ago

        @Telorand I’m not familiar with that distro; I am, however, familiar with how mount works. As for what is immutable and what is not, you can set that with chattr +i file/directory or chattr -i file/directory.

    • evasync@lemmy.world (OP) · 4 days ago

      Editing /etc/fstab didn’t work (I just changed the path, though I’m not sure if the UUID plays any part), but I’ll give the rm/mkdir part a go.
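
      For what it’s worth, the UUID does matter: the entry is matched to the filesystem by that identifier, so it has to be the UUID of the actual partition (lsblk -f or blkid will show it). A hypothetical entry, with a placeholder UUID, would look like:

      ```
      # /etc/fstab entry mounting a data partition at /var/home
      # (on Silverblue, /home is a symlink to var/home).
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/home  ext4  defaults  0  2
      ```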

        • evasync@lemmy.world (OP) · 4 days ago

          No, but I rebooted the system after the change. Do I still need to update it regardless of the reboot?

          • data1701d (He/Him)@startrek.website · 3 days ago

            Edit: Probably try @[email protected]’s solution of systemctl daemon-reload first.

            Yes. When booting, your system has an initial image (the initramfs) that it boots from before mounting your filesystems. You have to make sure that image reflects the updated fstab.
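
            Concretely, after editing fstab that usually means one of the following, depending on the distro (a sketch; run whichever matches your system):

            ```shell
            # Have systemd regenerate its mount units from the edited fstab.
            sudo systemctl daemon-reload

            # On traditional distros, also rebuild the initramfs if the
            # change affects filesystems mounted during early boot.
            sudo dracut --force          # Fedora and derivatives
            sudo update-initramfs -u     # Debian/Ubuntu
            ```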

              • data1701d (He/Him)@startrek.website · 3 days ago

                You might be right. I was thinking of it in terms of a traditional distro, since I use vanilla Debian, where my advice would apply and yours probably wouldn’t.

                From what I do know, though, I guess /etc would be part of the writable roots overlaid onto the immutable image, so it would make sense if the immutable image were sort of the initramfs and were read when root was mounted, or something. Your command is probably the correct one for immutable systems.

  • Max-P@lemmy.max-p.me · 4 days ago

    The error says /home is a symlink; what do you get if you run ls -l /home?

    Since this is an atomic distro, /home might be a symlink to /var/home.
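
    A sketch of what that check looks like; on a default Silverblue layout the symlink typically points at var/home:

    ```shell
    # Show what /home itself is (-d stops ls from following the link).
    ls -ld /home

    # Print the symlink target directly.
    readlink /home
    ```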