• 0 Posts
  • 447 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • What’s your GPU? With Nvidia you will need to use the proprietary drivers; with AMD it depends on how old the card is, but newer ones should work well with the default driver.

    From the issues you mentioned on Ubuntu I think it’s likely you have an Nvidia card, since it doesn’t always play completely nice with Wayland, which sucks because X11 is halfway out the window.

    Another thing I think you probably know, but just in case: you can install different desktop environments on the same distro, no need to change distros for that. So you could install Plasma (and yes, Plasma is KDE) or GNOME on your existing Mint installation.

    Honestly I think Mint is great for beginners and if you’re happy with it there’s no reason to switch. One thing I always recommend though is keeping /home in a separate partition so you can reinstall or switch distros without deleting your data.
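For reference, a separate /home is just a second entry in /etc/fstab (the UUIDs below are made up):

```
# /etc/fstab — root and home on their own partitions
UUID=aaaa-1111  /      ext4  defaults  0  1
UUID=bbbb-2222  /home  ext4  defaults  0  2
```

When reinstalling, you then tell the installer to format only the root partition and to mount the existing /home partition without formatting it.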


  • When I started, my home server was an old laptop; eventually it became an old desktop, and now it’s server-specific hardware. My recommendation is to use whatever you have at hand unless you have specific reasons not to. I went from laptop to desktop because I needed more disk space, and to specialized hardware for practical reasons (less space, less electricity, easily accessible hot-swappable hard drives). But for most of the stuff I run, an old laptop would still be enough; heck, a Raspberry Pi would be enough for most of it.




  • First, to answer your main question: if I were you I would try NixOS, because it’s declarative, so it’s essentially impossible to break permanently, i.e. if it breaks for whatever reason, a fresh reinstall will get you back to exactly where you were.

    That being said, I know it’s anecdotal, but I have been using Arch for (holy crap) 15 years and I’ve never had an update break my system. I find that most of the time when people complain about Arch breaking on an update, they’re either not actually using Arch (but Manjaro, EndeavourOS, etc.) or they rely heavily on the AUR, which one specifically should not do, much less on Arch derivatives. The AUR is great, but there’s a reason those packages are not in the main repos; don’t use any system-critical stuff from it and you should be golden.

    Also, try to figure out why stuff broke when it did: you’ll learn a lot about what you’re doing wrong in your setup, because most people would have applied the same update without any issues. Otherwise it really doesn’t matter which distro you choose. Mangling a distro with manual installations to the point where an upgrade breaks them can be done on most of them, and going for a fully immutable one will be very annoying if you’re so interested in poking at the system.


  • The AUR is essentially a non-curated repository of build scripts named PKGBUILD, each of which performs some actions and builds a package pacman can install. The expected way to use it is to download the PKGBUILD to a folder, read it to ensure it is not malicious, and run makepkg, which will generate a package you can install with pacman.

    That being said most people use a helper which does all of that automatically. My recommendation is to install yay or paru using the process I mentioned above to understand it, and from then on use that program to install new stuff. Both of them are drop-in replacements for pacman so you can use them for all package installation.
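To make the manual route concrete, here’s roughly what it looks like for yay itself (a sketch, assuming an Arch system with git and base-devel installed):

```shell
# Fetch the build files for the package from the AUR
git clone https://aur.archlinux.org/yay.git
cd yay

# Inspect the build script before running anything from it
less PKGBUILD

# Build the package (-s pulls in build dependencies) and install it (-i)
makepkg -si
```

From then on, `yay -S <package>` automates the clone/inspect/build steps for you.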



  • Besides what the other person said, there’s also the whole treating Linux users as second-class citizens. If they didn’t have a launcher for Windows it wouldn’t be that big of a problem, but they created a launcher for Windows years ago, porting it to Linux has been the most upvoted feature request ever since, and they still haven’t done it. That’s a slap in the face of a community that shares a lot of their beliefs. Valve is investing money in making Linux gaming a reality; GOG won’t even port their launcher to Linux. Despite not caring for a launcher, I know who I’m giving my money to.


  • I use characters from whichever book I’m reading at the time. Examples:

    • Arya: From ASOIAF, a small but powerful Ultrabook
    • Cthulhu: From HP Lovecraft, a huge 17" laptop
    • Horus: From the Horus Heresy books, a powerful laptop
    • Binky: Death’s white horse from Discworld, a white desktop
    • Peaches: A rat that always carries a book with her. My home server

  • If all you care about is money, then it’s even less on Hetzner at 48/year. But the reason I recommended Borgbase is that it’s a bit better known and more trustworthy. $8 a year is a very small difference; sure, it will be more than that because, like you said, you won’t use the full TB on B2, but I still don’t think it’ll end up that different. However, there are some advantages to using a Borg-based solution:

    • Borg can back up to multiple places, so the same setup can send a backup to the cloud and to a secondary disk
    • Borg is an open source tool, so you can run your own Borg server, which means you can have backups sent to your desktop
    • Again, because Borg is open, you could run a Raspberry Pi with a 1TB USB disk as a backup target, which would be cheaper than any hosted solution
    • Or you could even pair up with a friend, hosting their backups on your server while they do the same for you.

    And the most important part: migrating from one to the other is simple, just a config change, so you can start with Borgbase and in a year buy a minicomputer to leave at your parents’ house, making all of the needed config changes in seconds. Migrating away from B2, on the other hand, will involve a secondary tool. Personally I think that flexibility is worth way more than those $8/year.

    Also, Borg has deduplication, versioning and encryption. I think B2 has all of that too, but I’m not entirely sure, because it’s my understanding that they duplicate the entire file when part of it changes, so you might end up paying a lot more for it.
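As a rough sketch of the workflow (the repository path here is invented; a Borgbase ssh:// URL goes in the same place):

```shell
# Create an encrypted repository once
borg init --encryption=repokey /mnt/backup/main

# Each backup run: a deduplicated, encrypted archive named host-timestamp
borg create --stats /mnt/backup/main::'{hostname}-{now}' /home /etc

# Thin out old archives on a schedule
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /mnt/backup/main
```

Migrating providers is then just pointing the same commands at a new repository.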


    As for the full system backup, I still think it’s not worth it. How do you plan on restoring it? You would probably have to boot a live USB and perform the steps there, which would involve formatting your disks properly, connecting to the remote server, getting your data, chrooting into it and installing a bootloader. It just seems easier to install the OS and run a script, even if you could shave off 5 minutes the other way, assuming everything worked correctly and you were very fast.

    Also, your system is constantly changing files, which means more opportunities for files to get corrupted (a similar reason why backing up a database’s data folder is a worse idea than backing up a dump of it), and some files are infinite, e.g. /dev/zero or /dev/urandom, so you would need to be VERY careful about what to back up.

    At the end of the day I don’t think it’s worth it. How long does it take you to install Linux on a machine? I would guess around 20 min; restoring your 1TB backup will certainly take much longer than that (probably a couple of hours), and once the system is up you can restore the critical stuff early without waiting for the full backup. Another reason why Borg is a good idea: you can have a small backup of critical stuff that restores in seconds, and another repository for the stuff that takes longer. So Immich might take a while to come back, but Authentik and Caddy can be up in seconds. Again, I’m sure B2 can also do this, but probably not as intuitively.


  • I figure the most bang for my buck right now is to set up off-site backups to a cloud provider.

    Check out Borgbase; it’s very cheap and it’s an actual backup solution, so it offers features you won’t get from Google Drive or whatever you were considering, e.g. deduplication, recovering data from different points in time, and encryption so there’s no way for them to access your data.

    I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.

    The vast majority of your system is the same as it would be after a fresh install, so you’re wasting backup space storing data you can easily recover in other ways. You only need to store the changes you made to the system, e.g. which packages are installed (just save the package list and reinstall from it later, no need to back up the binaries) and which config changes you made. Plus, if you’re using Docker for services (which you really should), the services are also very easy to recover. So if you back up the compose file and the config folders for those services (and obviously the data itself) you can be back up in almost no time. Also, even with a full system backup you would need to chroot into the restored system to install a bootloader, so it’s not as straightforward as you think (unless your backup is a dd of the disk, which is a bad idea for many other reasons).
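As an illustration, for a service laid out like this (image name and paths are hypothetical), the compose file plus the two mounted folders are the whole backup:

```yaml
# docker-compose.yml — everything the service needs lives in this folder
services:
  app:
    image: example/app:latest   # hypothetical image
    restart: unless-stopped
    volumes:
      - ./config:/config        # back this folder up
      - ./data:/data            # and this one
```

Restoring is then copying the folder back and running `docker compose up -d`.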

    I then decided I would instead cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file in the case of SQLite) and call it good. You need to dump them first and backup the dumps.

    Yes and no. You can back up the file directly, but it’s not good practice. The reason is that if the file gets corrupted you lose all the data, whereas a dump of the database contents is much less likely to end up corrupted. But in actuality there’s no reason why backing up the files themselves shouldn’t work (in fact, when you launch a Docker container it’s always an entirely new database process pointed at the same data folder).
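For SQLite specifically there’s a middle ground: Python’s standard library can take a consistent snapshot of a live database file, which is the safe version of “just copy the file” (paths here are invented):

```python
import sqlite3

def backup_sqlite(src_path: str, dst_path: str) -> None:
    """Copy the database at src_path into dst_path as a consistent snapshot."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    with dst:
        src.backup(dst)  # page-by-page copy, safe even while src is being written
    dst.close()
    src.close()
```

For Postgres or MySQL the equivalent is running pg_dump / mysqldump inside the container and backing up the dump.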

    So, now I’m configuring a docker-db-backup container to back each one of them up, finding database containers and SQLite databases and configuring a backup job for each one. Then, I hope to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, restore the backups, bring up new containers, and then restore each container’s backup into the new database. It’s pretty far from my initial hope of being able to restore all the files and start using the newly restored system.

    Am I going down the wrong path here, or is this just the best way to do it?

    That seems like the safest approach. If you’re concerned about it being too much work I recommend you write a script to automate the process, or even better an Ansible playbook.


  • Care to explain for the uninitiated like me? It feels like a meme, but conceptually an Option<Result> is very different from a Result<Option>. Maybe I’m overthinking, but to me an Option<Result> None means no action was taken (e.g. a function that runs every loop to take an action every second will return None most times and Some when it executes), whereas an Ok(None) means an action was taken and it has nothing to return (e.g. a function that safely reads a value from a JSON file: it didn’t fail reading the file, so it’s an Ok, but the value wasn’t there, so it’s None).
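If it helps, here’s a toy sketch of that distinction (function names and logic are made up purely for illustration):

```rust
// Option<Result<..>>: "did anything happen, and if so, did it work?"
fn maybe_tick(elapsed_ms: u64) -> Option<Result<u64, String>> {
    if elapsed_ms < 1000 {
        None // no action taken this loop iteration
    } else {
        Some(Ok(elapsed_ms / 1000)) // action ran and succeeded
    }
}

// Result<Option<..>>: "did the operation work, and if so, was there a value?"
fn read_key(json: &str, key: &str) -> Result<Option<String>, String> {
    if !json.trim_start().starts_with('{') {
        return Err("could not parse file".into()); // the read itself failed
    }
    // toy lookup; a real version would use a JSON parser
    if json.contains(&format!("\"{}\"", key)) {
        Ok(Some(format!("value of {}", key)))
    } else {
        Ok(None) // the read went fine, the key just isn't there
    }
}

fn main() {
    assert_eq!(maybe_tick(500), None);          // nothing happened yet
    assert_eq!(maybe_tick(2000), Some(Ok(2)));  // action taken, succeeded
    assert!(read_key("[1, 2]", "a").is_err());  // read itself failed
    assert_eq!(read_key("{\"a\": 1}", "b"), Ok(None)); // read ok, key missing
}
```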


  • If I’m traveling I usually don’t mind, but if it gets late I just say I’m tired and go back to the hotel. Which is true, since when I travel for work it’s usually across too many timezones, so I’m zombified very early and it shows.

    That being said, one time there was a local celebration during working hours and we all went to the pub. We were having beers and snacks when 18h rolled around and I stood up, said my goodbyes and started to leave. Most people just waved and that was that, but one of those extremely social guys said “where do you think you’re going?”, to which I replied “it’s 18h, work is over” as I walked through the door. I could hear people laughing before the door fully shut, and no one cared one bit. Most people who enjoy that sort of thing will enjoy it even if you’re not there, so don’t ruin your day and your sleep trying to please others.


  • I use a 40% (a Corne keyboard, specifically). Before that I had a 60% (Redragon K530). Neither is bad, but I prefer my Corne for typing and programming by a LONG shot.

    When you look at it, it seems like a 40% would be too small for typing, but in reality it’s much more efficient because you have layers: for example, with one button my right hand is now resting on a numpad, and with a different button it’s symbols, both of which are much harder to reach on a 60% or even a full-sized keyboard. This (and other videos from the same guy) pushed me over the edge to build my own keyboard, and I’m really glad I did.

    Edit: since you asked about arrows: on my keyboard you press a button and ESDF become arrows. Why not WASD, you might ask, and the answer is that ESDF is just like WASD but in the position where your hand rests when typing.





  • It can be generated as an excel file or as another file type which we cannot use.

    This is probably a dumb question, and there’s likely a very good reason why this can’t be done, but can you not generate the Excel file from one of the other formats yourself? E.g. have the program output a CSV and write a Python script that parses it into an Excel file. That way you might have more control over the generated Excel and maybe be able to automate it.
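For instance, with the openpyxl library (a third-party package, so it would need to be installed; file names here are invented), the conversion is only a few lines:

```python
import csv
from openpyxl import Workbook

def csv_to_xlsx(csv_path: str, xlsx_path: str) -> None:
    """Convert a CSV export into an Excel workbook."""
    wb = Workbook()
    ws = wb.active
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            ws.append(row)  # each CSV row becomes one spreadsheet row
    wb.save(xlsx_path)
```

Note that everything lands as text cells this way; if the Excel consumer needs real numbers or dates, the script would have to convert those columns before appending.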