I like my Linux installs heavily customized and security hardened, to the extent that copying over /home won’t cut it, but not so much that it breaks when updating Debian. Whenever someone mentions reinstalling Linux, I am instinctively nervous thinking about the work it would take for me to get from a vanilla install to my current configuration.

It started a couple of years ago, when dreading the work of configuring Debian to my taste on a new laptop, I decided to instead just shrink my existing install to match the new laptop’s drive and dd it over. I later made a VM from my install, stripped out personal files and obvious junk, and condensed it to a 30 GB raw disk image, which I then deployed on the rest of my machines.

That was still a bit too janky, so once my configuration and installed packages stabilized, I bit the bullet, spun up a new VM, and painstakingly replicated my configuration from a fresh copy of Debian. I finished with a 24 GB raw disk image, which I can now deploy as a “fresh” yet pre-configured install, whether to prepare new machines, make new VMs, fix broken installs, or just because I want to.

All that needs to be done after dd’ing the image to a new disk is:

  • Some machines: boot grubx64.efi/shimx64.efi from Ventoy and “bless” the new install with grub-install and update-grub
  • Reencrypt LUKS root partition with new password
  • Configure user and GRUB passwords
  • Set hostname
  • Install updates and drivers as needed
  • Configure for high DPI if needed
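
The checklist above can be sketched as a script. The device paths, hostname, and user name are placeholder assumptions, not from the post, and with DRY_RUN=1 (the default here) each command is only printed instead of executed:

```shell
#!/bin/sh
# Post-deploy sketch for the checklist above. All device names and the
# hostname are placeholders -- adjust for the target machine.
# DRY_RUN=1 (the default) prints each command instead of running it.
set -eu
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

DISK=/dev/nvme0n1        # assumed target disk
ROOT_PART=/dev/nvme0n1p3 # assumed LUKS root partition
NEW_HOSTNAME=new-machine # placeholder

# "Bless" the new install from the booted system (some machines)
run grub-install "$DISK"
run update-grub

# Re-encrypt the LUKS root so the image's volume key isn't reused
run cryptsetup reencrypt "$ROOT_PART"

# User password; GRUB's password goes through grub-mkpasswd-pbkdf2
run passwd user

# Hostname
run hostnamectl set-hostname "$NEW_HOSTNAME"

# Updates
run apt update
run apt full-upgrade
```

Re-encrypting in place takes a while on large partitions; LUKS2 reencryption can generally be resumed if interrupted.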

I’m interested to hear if any of you have a similar workflow or any feedback on mine.

  • Possibly linux@lemmy.zip · 14 days ago

    Use configuration tooling such as Ansible.

    You could also set up an image-build pipeline: tools like Docker and/or Ansible let you rebuild the same system repeatably.

  • lordnikon@lemmy.world · 14 days ago

    That workflow seems fine if it works for you. It seems like overkill for Debian, but I don’t see anything wrong with it.

    One way I do it: run dpkg -l > package.txt to get a list of all installed packages to feed into apt on the new machine. Then set up two stow directories, one for global configs and one for dotfiles in my home directory; whenever a change is made, commit and push to a personal git server.

    Then, when you want to set up a new system, do a minimal install and run apt install git stow.

    Then clone your repos, grab package.txt, and reinstall the packages (apt won’t read the list from stdin, so use xargs: xargs -a package.txt sudo apt install -y). Run stow on each stow directory and you’re back up and running after a reboot.
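
    A runnable sketch of the package-list half of this workflow, with two pitfalls handled: dpkg -l prints a formatted table rather than a bare list (apt-mark showmanual gives one name per line), and apt won’t read package names from stdin, so xargs expands the list. File contents here are examples:

```shell
#!/bin/sh
# Package-list capture and restore (sketch; package names are examples).
set -eu

# On the old machine (faked here so the demo is runnable):
cat > package.txt <<'EOF'
git
stow
vim
EOF
# real version: apt-mark showmanual > package.txt

# On the new machine -- drop the `echo` to actually install:
xargs -a package.txt echo sudo apt install -y
```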

  • data1701d (He/Him)@startrek.website · 14 days ago

    You might be able to script something with debootstrap. I tested bcachefs on a spare device once and couldn’t get through the standard Debian install process, so I ended up using a live image to debootstrap the drive. You can give it a list of packages to install and then copy your configs over to the new partition.
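
    A hedged sketch of that flow; the target mount point, suite, and package list are placeholders, and the command is echoed rather than executed so nothing is written to disk:

```shell
#!/bin/sh
# Debootstrap sketch. Remove the `echo` on a real system where the
# target partition is mounted at $TARGET.
set -eu
TARGET=/mnt
SUITE=bookworm
MIRROR=http://deb.debian.org/debian
PKGS=git,stow,vim    # extra packages to install during bootstrap

echo debootstrap --include="$PKGS" "$SUITE" "$TARGET" "$MIRROR"
# afterwards: copy configs into $TARGET, chroot in, install a bootloader
```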

  • Unmapped@lemmy.ml · 14 days ago

    You should check out NixOS. You write one config file that you can just copy over to as many machines as you want.

    • Yeah, this is a good use case for it. If I remember right, you can also trivially generate a live-installer ISO from the same Nix configuration you’d use for any usual update. So you can make a custom installer for your exact configuration and copy it onto a flash drive to bootstrap a working environment. I think the live installer would generate something like a hardware-configuration.nix too.

      • thejevans@lemmy.ml · 14 days ago

        You could also use nixos-anywhere + disko. This is what I use. If you have SSH and root access to a linux machine, you can live swap to a NixOS installer, load a configuration over SSH, install and reboot. It gives a similar experience to Ansible.
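
        For illustration, the usual invocation looks roughly like this; the flake attribute and address are placeholders, and the command is echoed because a real run wipes the target disk:

```shell
#!/bin/sh
# nixos-anywhere sketch. Remove the `echo` to actually live-swap the
# target into an installer and reinstall it over SSH (destructive!).
set -eu
FLAKE=".#myhost"          # your NixOS configuration, with a disko layout
TARGET="root@192.0.2.10"  # machine you have SSH + root access to

echo nix run github:nix-community/nixos-anywhere -- --flake "$FLAKE" "$TARGET"
```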

    • 4am@lemm.ee · 14 days ago

      That or Ansible, if you will have a machine to deploy from

      • Possibly linux@lemmy.zip · 14 days ago

        You don’t need a machine to deploy from, just a git repo and ansible-pull. It will pull down the repo and run its playbooks against the host. (Target localhost in the playbook to run it on the local machine.)
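
        A minimal sketch of that setup; the repo URL is a placeholder, and local.yml is a playbook name ansible-pull looks for at the repo root by default:

```shell
#!/bin/sh
# ansible-pull sketch (repo URL is a placeholder).
set -eu

# A minimal playbook targeting the machine it runs on:
cat > local.yml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: ensure stow is installed
      ansible.builtin.apt:
        name: stow
        state: present
EOF

# Each run clones/updates the repo, then runs the playbook locally
# (echoed here -- drop the echo on a machine with ansible installed):
echo ansible-pull -U https://git.example.com/me/config.git local.yml
```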

      • TunaCowboy@lemmy.world · 14 days ago

        if you will have a machine to deploy from

        You can run ansible against localhost, so you don’t even need that.

  • boredsquirrel@slrpnk.net · 14 days ago

    I did the same, exactly the way you did, but my “zygote” isn’t as advanced.

    I should make a raw image too, but currently I just use Clonezilla (which shrinks and resizes automatically) and keep a small SSD with a nearly vanilla system.

    Just because the Fedora ISO didn’t boot

  • oscardejarjayes [comrade/them]@hexbear.net · 14 days ago

    You could try using HashiCorp’s Packer to generate images repeatably (though it’s usually meant more for cloud images). Or NixOS (like others have mentioned), or Guix (like NixOS, but better in some ways, worse in others). You could also make it an Ansible playbook, which would let you both build configured images and configure machines that already have an OS.

    I do something similar with archiso, fwiw, but that only works with Arch Linux.

    Would you want to change your distribution, or just keep Debian with some tools to automate?

    • Lichtblitz@discuss.tchncs.de · 14 days ago

      An Ansible playbook is perfect for this. All your configuration is repeatable, whether on a running system or a new one. Plus you can start from a completely fresh, up-to-date image and apply your configuration from there, instead of starting from a soon-to-be-outdated custom image.

  • darius@lemmy.ml · 14 days ago

    I have the exact same workflow except I have two images: one for legacy/MBR and another for EFI/GPT – once I read your post I was glad to see I’m not alone haha!

  • ouch@lemmy.world · 14 days ago

    Just put your system configuration in an Ansible playbook. When your distro has a new release, go through your changes and remove the ones that are no longer relevant.

    For your home directory, I recommend a dotfiles repository with subdirectories for each tool (bash, git, vim, etc.). Use GNU Stow to symlink the required files into place on each machine.
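
    A sketch of that layout; tool names and paths are examples, and the stow call itself is echoed as a dry run:

```shell
#!/bin/sh
# Dotfiles-repo sketch: one subdirectory per tool, each mirroring $HOME.
set -eu
DOT="$(mktemp -d)/dotfiles"

mkdir -p "$DOT/bash" "$DOT/git" "$DOT/vim"
touch "$DOT/bash/.bashrc" "$DOT/git/.gitconfig" "$DOT/vim/.vimrc"

# Stow symlinks each package's files into the target directory,
# e.g. ~/.bashrc -> dotfiles/bash/.bashrc (drop the echo to run it):
echo stow -d "$DOT" -t "$HOME" bash git vim
```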

  • ezekielmudd@reddthat.com · 14 days ago

    I believe that Proxmox does this because I have installed/created containers from their available images. I wonder how they create those container images?