  • The problem is that games don’t run at all or require major effort to run without issues.

    A major cause for that is the distro - when it comes to gaming, the distro makes a huge difference, as I outlined previously. The second major cause is the flavor of Wine you chose (Proton-GE is the best, not sure what you used). The third major cause is not checking whether the games are even compatible in the first place (via ProtonDB, Reddit etc) - you should do this BEFORE you recommend Linux to a gamer.

    In saying all that, I’ve no idea about pirated stuff though, you’re on your own on that one - Valve and the Wine developers obviously don’t test against pirated copies, and you won’t get much support from the community either.


  • Unfortunately you chose the wrong distro for your friend - Linux Mint isn’t good for gaming. It uses an outdated kernel/drivers/other packages, which means you’ll be missing out on all the performance improvements (and fixes) found in more up-to-date distros. Gaming on Linux is a fast-moving target; the landscape is changing at a rapid pace thanks to the development efforts of Valve and the community. So for gaming, you’d generally want to be on the latest kernel+mesa+wine stack.

    Also, as you’ve experienced, on Mint you’d have to manually install things like Waydroid and other gaming software, which can be a PITA for newbies.

    So instead, I’d highly recommend a gaming-oriented distro such as Nobara or Bazzite. Personally, I’m a big fan of Bazzite - it has everything you’d need for gaming out-of-the-box, and you can even get a console/Steam Deck-like experience if you install the -deck variant. Also, because it’s an immutable distro with atomic updates, it has a very low chance of breaking, and on the rare occasion that an update has some issues - you can just select the previous image from the boot menu. So this would be pretty ideal for someone who’s new to Linux, likes to game, and just wants stuff to work.

    In saying that, getting games to run on Linux can be tricky sometimes, depending on the game. The general rule of thumb is: try running the game using Proton-GE, and if that fails, check ProtonDB for any fixes/tweaks needed for that game - with this, you would never again have to spend hours on troubleshooting, unless you’re playing some niche game that no one has tested before.


  • Bazzite. Here’s why:

    • Optimised for gaming (gaming-optimised kernel, common tweaks pre-applied, all common gaming apps pre-installed like Steam, MangoHud etc)
    • All necessary drivers pre-installed (game controllers, RGB, and even proprietary nVidia)
    • A Steam-Deck like gaming experience, if you want (the Deck variant boots directly to Steam)
    • Immutable and atomic (image-based OS updates, so updates either work or don’t - there’s no chance of a broken state)
    • Easy rollbacks (just select the previous image in the GRUB menu)

    But since you said:

    how to squeeze the best performance out of this

    and if you’re really serious about squeezing out the best performance, then check out the Arch-based CachyOS - unlike most other Linux distros, Cachy has optimised x86-64-v3 and v4 packages in their repos, which means apps can make use of advanced CPU instructions such as SSE4.2, AVX2 and AVX-512. Most other Linux distros, on the other hand, still target the baseline x86-64 (v1) for compatibility reasons, which unfortunately means that you’d be missing out on all the cool new optimised CPU instructions introduced over the past 16 years.

    You can read more about microarchitecture levels (aka MARCH) here: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels
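
    If you want to check whether your own CPU supports the v3/v4 feature levels, here’s a minimal C sketch using GCC’s __builtin_cpu_supports builtin (the level mapping in the comments is an approximation of the official definitions - each level actually requires several features, not just these):

    #include <stdio.h>

    int main(void) {
        /* roughly: x86-64-v2 needs SSE4.2, v3 needs AVX2, v4 needs AVX-512F */
        printf("SSE4.2  (v2): %s\n", __builtin_cpu_supports("sse4.2")  ? "yes" : "no");
        printf("AVX2    (v3): %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
        printf("AVX512F (v4): %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
        return 0;
    }

    On any v3-capable CPU the first two lines should print “yes” - meaning Cachy’s v3 packages will run on it.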

    In addition to the MARCH, Cachy’s packages have other optimisations such as LTO/PGO, optimised kernel with the BORE and Rusty schedulers which are better for gaming, plus several performance-oriented tweaks which you’d otherwise have to do manually on Arch (such as makepkg.conf tweaks, pacman.conf tweaks etc).

    Finally, Cachy are always on the bleeding edge when it comes to gaming/driver/kernel/performance related stuff, so you’ll get all the good stuff even before Bazzite or other optimised distros. For instance, Cachy was the first distro to include the new nVidia driver which has explicit sync support for better Wayland compatibility, and they’re always on top of major Arch developments and provide detailed announcements which are relevant to gamers and performance freaks.

    Eg, here’s their recent nVidia announcement:

    Hi @here,

    as you maybe noticed, we have rolled out the new NVIDIA Driver, which includes the explicit sync protocol and tearing for Vulkan. We have been prioritized to move this forward to finally resolve the wayland situation. Additionally arch has pushed CUDA to 12.5, which is NOT compatible with the current 550 driver (it needs the 555 Driver).

    The beta driver is not perfect, but so far we are applying some fixes to avoid issues and restore performance problems with disabling the GSP Firmware load. This is handled via the “cachyos-settings” package.

    Anyways, since some people maybe have problems with this driver, here is a short instruction to manually downgrade and block the driver:

    […]

    If you are facing issues with the new NVIDIA Driver, reproduce the issues and then run “sudo nvidia-bugreport.sh” and report it to their forum: https://forums.developer.nvidia.com/c/gpu-graphics/linux/148

    We are also shipping now an precompiled nvidia-open module. This will be also as default installed for users, which have supported cards as soon NVIDIA releases the 560 drivers.

    The CachyOS Team

    So as you can see, they’re pretty on to it with this sorta stuff.

    Now the Bazzite team are also like the Cachy guys and keep up with this stuff, but because they’re based on Fedora, they can’t be as bleeding edge or as optimised as Arch. So it’s up to you - if you prefer stability, primarily gaming-focused optimisations, and want something that “just works”, then get Bazzite; or if you want an ultra-optimised distro to squeeze the most performance out of your box and don’t mind occasionally diving into the terminal and getting your hands dirty, then get CachyOS.

    cc: @[email protected]







  • d3Xt3r@lemmy.nz to Linux@lemmy.ml - Thoughts on CachyOS?

    No need to hop around for the same thing.

    It’s not really the same thing. EndeavourOS is basically vanilla Arch + a few branding packages. CachyOS is an opinionated Arch with optimised packages.

    You still have the option to select the DE and the packages you want to install - just like EndeavourOS - but what sets Cachy apart is the optimisations. For starters, they have multiple custom kernel options, with the BORE scheduler (and a few others), LTO options etc. Then they also have packages compiled for the x86-64-v3 and v4 architectures for better performance.

    Of course, you could also just use Arch (or EndeavourOS) and install the x86-64-v3/v4 packages yourself from ALHP (or even the Cachy repos), and you can even manually install the Cachy kernel or a similar optimised one like Xanmod. But you don’t get the custom configs / opinionated stuff - which you may actually not want as a veteran user. But if you’re a newbie, then having those opinionated configs isn’t such a bad idea, especially if you decide to just get a WM instead of a DE.

    I’ve been thru all of the above scenarios, depending on the situation. My homelab is vanilla Arch but with packages from the Cachy repo. I’ve also got a pure Cachy install on my gaming desktop, because I was feeling lazy and just wanted an optimised install quickly. They also have a gaming meta package that installs Steam and all the necessary 32-bit libs and stuff, which is nice.

    Then there’s Cachy Browser, which is a fork of LibreWolf with performance optimisations (kinda similar to Mercury browser, except Mercury isn’t MARCH optimised).

    As for support, their Discord is pretty active, you can actually chat with the developers directly, and they’re pretty friendly (and this includes Piotr Gorski, the main dev, and firelzrd - the person behind the BORE scheduler). Chatting with them, I find the quality of technical discussions a LOT higher than the Arch Discord, which is very off-topic and spammy most of the time.

    Also, I liked their response to Arch changes and incidents. When Arch made the recent mkinitcpio changes, they made a very thorough announcement with the exact steps you needed to take (which was far more detailed than the official Arch announcement). Also, when the xz backdoor happened, they updated their repos to fix it even before Arch did.

    I’ve also interacted with the devs personally on various technical topics - such as CFLAG and MARCH optimisations, performance benchmarking etc - and it seems like they definitely know their stuff.

    So I’ve full confidence in their technical ability, and I’m happy to recommend the distro for folks interested in performance tuning.

    cc: @[email protected]


  • Others here have already given you some good overviews, so instead I’ll expand a bit more on the compilation part of your question.

    As you know, computers are digital devices - that means they work on a binary system, using 1s and 0s. But what does this actually mean?

    Logically, a 0 represents “off” and 1 means “on”. At the electronics level, 0s may be represented by a low voltage signal (typically between 0-0.5V) and 1s are represented by a high voltage signal (typically between 2.7-5V). Note that the actual voltage levels, or whatever is used to represent a bit, may vary depending on the system. For instance, traditional hard drives use magnetic regions on the surface of a platter to represent these 1s and 0s - if the region is magnetised with the north pole facing up, it represents a 1; if the south pole is facing up, it represents a 0. SSDs, which employ flash memory, use cells that can trap electrons, where a charged state represents a 0 and a discharged state represents a 1.

    Why is all this relevant you ask?

    Because at the heart of a computer, or any “digital” device - and what sets a digital device apart from any random electrical equipment - are transistors. They are tiny semiconductor components that can amplify a signal, or act as a switch.

    A voltage or current applied to one pair of the transistor’s terminals controls the current through another pair of terminals. This resultant output represents a binary bit: it’s a “1” if current passes through, or a “0” if current doesn’t pass through. By connecting a few transistors together, you can form logic gates, and by combining logic gates you can perform simple math like addition and multiplication. Connect a bunch of those and you can perform more complex math. Connect thousands or more of those and you get a CPU. The first Intel CPU, the Intel 4004, consisted of 2,300 transistors. A modern CPU that you may find in your PC consists of tens of billions of transistors, and special chips used for machine learning etc may even contain trillions of transistors!
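
    To make the “gates can do math” part concrete, here’s a tiny C sketch of a 1-bit half adder - the XOR/AND combination that real adder circuits are built from (C’s bitwise operators stand in for the physical gates):

    #include <stdio.h>

    int main(void) {
        for (int a = 0; a <= 1; a++) {
            for (int b = 0; b <= 1; b++) {
                int sum   = a ^ b;  /* XOR gate: the sum bit */
                int carry = a & b;  /* AND gate: the carry bit */
                printf("%d + %d = carry %d, sum %d\n", a, b, carry, sum);
            }
        }
        return 0;
    }

    Chain enough of these together (with the carry feeding the next stage) and you get the multi-bit adders inside a real CPU.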

    Now to pass information and commands to these digital systems, we need to convert our human numbers and language to binary (1s and 0s), because deep down that’s the language they understand. For instance, in the word “Hi”, the letter “H”, using the ASCII system, is converted to the binary 01001000, and the letter “i” becomes 01101001. Working with raw binary would be quite tedious for programmers, so we came up with a shorthand - the hexadecimal system - to represent these binary bytes. So in hex, “Hi” would be represented as 48 69, and “Hi World” would be 48 69 20 57 6F 72 6C 64. This makes it a lot easier to work with, when we are debugging programs using a hex editor.
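
    You can verify those values yourself with a few lines of C, printing each character of “Hi World” next to its hex code:

    #include <stdio.h>

    int main(void) {
        const char *s = "Hi World";
        for (int i = 0; s[i] != '\0'; i++)
            printf("'%c' = 0x%02X\n", s[i], (unsigned char)s[i]);
        return 0;
    }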

    Now suppose we have a program that prints “Hi World” to the screen - in compiled machine language format, it may look like this in a hex editor:
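
    Here’s an illustrative stand-in, in the style of hexdump -C output (the offsets are made up for this example, but the opcode bytes and the “Hi World” bytes are real values):

    00000060  ba 09 00 00 00 b9 a0 00  00 00 bb 01 00 00 00 b8  |................|
    00000070  04 00 00 00 cd 80 bb 00  00 00 00 b8 01 00 00 00  |................|
    ...
    000000a0  48 69 20 57 6f 72 6c 64  0a                       |Hi World.|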

    As you can see, the middle column contains a bunch of hex numbers, which is basically a mix of instructions (“hey CPU, print this message”) and data (“Hi World”).

    Now although hex code is easier for us humans to work with compared to binary, it’s still quite tedious - which is why we have programming languages, which allow us to write programs that we humans can easily understand.

    If we were to use Assembly language as an example - a language which is close to machine language - it would look like this:

     SECTION .data
msg: db "Hi World",10   ; the string to print, plus a newline (ASCII 10)
len: equ $-msg          ; its length, computed at assembly time

     SECTION .text

     global main
main:
     mov  edx,len       ; arg 3: number of bytes to write
     mov  ecx,msg       ; arg 2: address of the string
     mov  ebx,1         ; arg 1: file descriptor 1 (stdout)
     mov  eax,4         ; syscall number 4 = sys_write (32-bit Linux)
     int  0x80          ; call into the kernel

     mov  ebx,0         ; exit status 0
     mov  eax,1         ; syscall number 1 = sys_exit
     int  0x80          ; call into the kernel
    

    As you can see, the above code is still pretty hard to understand and tedious to work with. Which is why we’ve invented high-level programming languages, such as C, C++ etc.

    So if we rewrite this code in the C language, it would look like this:

    #include <stdio.h>
    int main() {
      printf ("Hi World\n");
      return 0;
    } 
    

    As you can see, that’s much easier to understand than assembly, and takes less work to type! But now we have a problem - our CPU cannot understand this code. So we’ll need to convert it into machine language - and this is what we call compiling.

    Using the previous assembly language example, we can compile our assembly code (in the file hello.asm), using the following (simplified) commands:

    $ nasm -f elf hello.asm   # assemble into a 32-bit ELF object file (hello.o)
    $ gcc -o hello hello.o    # link into an executable (on a 64-bit system you may also need -m32)
    

    Compilation is actually a multi-step process, and may involve multiple tools, depending on the language/compilers we use. In our example, we’re using the nasm assembler, which first parses and converts the assembly instructions (in hello.asm) into machine code, handling symbolic names and generating an object file (hello.o) containing binary code, memory addresses and other instructions. gcc (which invokes the linker, ld, behind the scenes) then merges the object files (if there are multiple), resolves symbol references, and arranges the data and instructions according to the Linux ELF format. This results in a single binary executable (hello) that contains all the necessary binary code and metadata for execution on Linux.

    If you understand assembly language, you can see how our instructions get converted, using a hex viewer:
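
    For example, disassembling with objdump -d -M intel hello would print something along these lines (the addresses are illustrative, but the opcode bytes next to each instruction are the actual x86 encodings):

    08048080 <main>:
     8048080:  ba 09 00 00 00    mov    edx,0x9
     8048085:  b9 a4 90 04 08    mov    ecx,0x80490a4
     804808a:  bb 01 00 00 00    mov    ebx,0x1
     804808f:  b8 04 00 00 00    mov    eax,0x4
     8048094:  cd 80             int    0x80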

    So when you run this executable using ./hello, the instructions and data, in the form of machine code, will be passed on to the CPU by the operating system, which will then execute it and eventually print Hi World to the screen.

    Now naturally, users don’t want to do this tedious compilation process themselves; also, some programmers/companies may not want to reveal their code - so most users never look at the code, and just use the binary programs directly.

    In the Linux/open-source world, we have the concept of FOSS (free software), which encourages sharing of source code, so that programmers all around the world can benefit from, build upon, and improve each other’s code - which is how Linux grew to where it is today, thanks to the sharing and collaboration of code by thousands of developers across the world. This is why most programs for Linux are available to download in both binary and source code formats (with the source code typically available on a git repository like GitHub, or as a single compressed archive (.tar.gz)).

    But when a particular program isn’t available in binary format, you’ll need to compile it from the source code. Doing this is pretty common practice for projects that are still in development - say you want to run the latest Mesa graphics driver, which may contain bug fixes or performance improvements that you’re interested in - you would then download the source code and compile it yourself.

    Another scenario: maybe you want a program to be optimised specifically for your CPU for the best performance - in which case, you would compile the code yourself, instead of using a generic binary provided by the programmer. And some Linux distributions, such as CachyOS, provide multiple versions of such pre-optimised binaries, so that you don’t need to compile them yourself. So if you’re interested in performance, look into the topic of CPU microarchitectures and CFLAGS.
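
    As a quick illustration using the hello.c example from earlier (these are standard GCC flags, shown only as a sketch):

    $ gcc -O2 -march=x86-64-v3 -o hello hello.c   # target the x86-64-v3 feature level
    $ gcc -O2 -march=native -o hello hello.c      # target every feature of the CPU you're building on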

    Sources for examples above: http://timelessname.com/elfbin/


  • This shouldn’t even be a question lol. Even if you aren’t worried about theft, encryption has a nice bonus: you don’t have to worry about securely erasing your drives when you want to get rid of them. I mean, sure, it’s not that big of a deal to wipe a drive, but sometimes you’re unable to do so - for instance, the drive could fail and you may not be able to do the wipe. So you end up getting rid of the drive as-is, but an opportunist could get a hold of that drive, attempt to repair it, and recover your data. Or maybe the drive fails but it’s still under warranty and you want to RMA it - with encryption on, you don’t have to worry about some random accessing your data.








  • I am not a fan because they install all that WINE stuff on the system level which is a huge security degradation.

    I disagree with this. Sure, it could be made more secure, but Wine on its own isn’t any greater a security risk than any other scripting runtime - say Python, which is also installed at the system level. Ultimately it’s up to the user to get their executables from trustworthy sources - and whether it’s a random bash script or an exe doesn’t really make a difference.

    As for Firefox, if you’re truly concerned about security then you wouldn’t be using it in the first place - you’d be using LibreWolf, which you can install without any issues.


  • I am! I run it both on my gaming PC and laptop.

    But it doesn’t seem like a “typical” distro for a daily driver? How does Bazzite for example differ from Nobara which is another gaming-oriented distro?

    Well, for starters, if you get the Bazzite-deck edition, your PC boots straight into Steam’s game mode - in this mode, everything runs thru gamescope so you get all the awesome benefits like being able to use FSR even with games that don’t support it, HDR and more. You get a console-like experience on PC, and it’s awesome.

    Another cool thing about this mode is that all your updates - OS, Flatpak, firmware/BIOS, container, Nix, pip etc - are presented as if they were Steam updates, like in SteamOS. They’re automatic too, and don’t interrupt your gaming experience. Basically a unified update backend and frontend, which is awesome.

    Compared to Fedora/Nobara, one advantage this has is that the updates are image based and atomic, so when you reboot, the new update goes live instantly - there’s no wait-time.

    Another advantage is that your previous image is available in the GRUB menu, so in case an update broke something, you can always boot from the previous image - no need to restore anything, no need to edit your fstab etc (unlike btrfs snapshot restores, where the subvolid changes). You can also pin “good” images to your GRUB menu (and I highly recommend doing that), so you can always fall back to a known good version. This came in handy on my laptop recently: after one of the Feb updates I was experiencing some weird graphics corruption in game mode, but thanks to image pinning I always had a working image to fall back to.

    Also, the rebase feature allows you to go back and forth between 90 days of images (stored on GitHub), so it’s easy to switch between various versions for testing. The rebase is also interesting because with just a single command you can switch to any other Fedora Atomic distro - so if you’re bored of Bazzite or you want to try out a new DE, it’s just one command to switch. And with pinning, you can always switch back instantly.
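
    For reference, pinning and rebasing are both one-liners (a rough sketch - I’m quoting the image URL from memory, so double-check the Bazzite docs for the exact name):

    $ sudo ostree admin pin 0    # keep the current deployment around as a permanent GRUB entry
    $ rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bazzite:stable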

    Finally, there’s the whole immutability aspect. Personally I’m ambivalent on immutability itself, but since it’s what enables image-based atomic updates (with easy rollbacks/rebases), I think of it more as a convenience - especially on a gaming-oriented machine, where I just wanna jump straight into my games without worrying about updates and broken systems.

    So having used Fedora, Nobara, and finally Bazzite, I can highly recommend Bazzite as a daily driver - and it’s 100% worth switching. AMA.







  • I doubt this is an Akko-specific issue - most keyboards should be using the standard USB HID drivers built into the kernel. This most likely has something to do with your DE or distro config - maybe an error in a config file somewhere, or some script/plugin behaving funky. In the past, KDE’s Snap Assist plugin was known to cause the keyboard to stop working; kwin scripts could also do weird things. Or it could be a third-party program, like a keyboard remapper (kmonad, wayland-mouse-mapper, kbct etc).

    You could try switching to a different DE temporarily to rule out a DE issue, but before you do that, maybe boot from a live USB of a different DE or distro (or maybe even try two ISOs of your current distro - one with the DE you’re using currently, and another with a different DE) and see if the keyboard works there. You could create a Ventoy live USB to make this easy - just dump all the different ISOs on the drive and you can select which one to boot.

    If, in your testing, you find that your keyboard works fine with the same distro and DE, then it would point to a config issue. In that case, the easiest fix is to just blow your .config folders away (or create a new user account) and start fresh.

    But if in your testing you find that the keyboard works under a different DE but not the one you’re using, then it’s likely a bug with the DE, so perhaps consider filing a bug report. But maybe try the same DE with a different distro first to make sure it’s not a distro-specific bug.


  • matching other programs and platforms

    Actually, Ctrl+C is the interrupt hotkey for pretty much every CLI app/terminal on every platform. Try it within the Command Prompt/PowerShell/Windows Terminal, or the macOS terminal - they’ll all behave the same.

    The use of Ctrl+C as an interrupt/termination signal has a very long history, even predating the old UNIX days and DEC - it goes back to the days of early telecommunications, where control characters were used for controlling the flow of data through telecommunication lines. These control characters, along with regular characters, were transmitted by being encoded in binary, and this encoding scheme was defined by ASCII (American Standard Code for Information Interchange), published in 1963.

    In ASCII, the control character ETX (meaning end-of-text; represented by the hex code 0x03) was used to indicate “this segment of input is over”, or “stop the current processing”.

    Now what does all this have to do with Ctrl+C, you ask?

    For that, you’ll need to go back to the days of early keyboards. Keyboards back then generated ASCII codes directly, and when a modifier key (Ctrl/Shift/Meta) on a keyboard was pressed in combination with another key, it modified the signal sent by the keyboard to produce a control character.

    Specifically, pressing Ctrl with a letter key made the keyboard clear (set to zero) the upper three bits of the binary code of the letter, thus effectively mapping the letter keys to control characters (0x00 - 0x1F: the first 32 characters on the ASCII table).

    • The ASCII code for ‘C’ is 0x43 (binary 01000011).
    • Pressing Ctrl+C clears the upper three bits, resulting in 00000011, which is 0x03 in hex.

    And would you look at that, 0x03 is the code which represents the control character ETX.
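
    You can reproduce this bit-clearing with a couple of lines of C - masking with 0x1F zeroes the upper three bits of the 7-bit ASCII code, exactly what those old keyboards did:

    #include <stdio.h>

    int main(void) {
        char c = 'C';  /* ASCII 0x43, binary 01000011 */
        printf("Ctrl+%c = 0x%02X\n", c, c & 0x1F);  /* prints Ctrl+C = 0x03 (ETX) */
        return 0;
    }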

    The use of ETX to interrupt a program in digital computers was first adopted by the TOPS-10 OS, which ran on DEC’s PDP-10 computer, back in the late 60s. Its successor, TOPS-20, also included it, followed by RSX-11 (on the PDP-11) and VMS (on the VAX-11).

    RSX-11 was a very influential OS, created by a team that included David Cutler. It influenced the design of several OSes that followed, such as VMS and Windows NT. Cutler later moved to Microsoft and became the father of Windows NT, whose console environment naturally adopted existing terminal conventions, including the use of ETX. In fact, NT’s internals were so similar to VMS that a lawsuit was in the works, but instead, MS agreed to pay off DEC millions of $$$.

    Also, when UNIX first came out (1969), it ran on DEC hardware, and so they followed the tradition of using the ETX signal to stop programs. This convention flowed to BSD (1978) which was based on UNIX, and NeXTSTEP (1989), which was based on BSD. NeXTSTEP was developed by NeXT Computers, which was founded by Steve Jobs… and the rest is history.

    Therefore, Ctrl+C is something that’s deeply rooted in history. You don’t just simply change something like that. Sure, you may be able to remap the keybinding using the standard remapping tools built into GNOME/KDE etc, but it’s hardcoded into many programs, so you’ll run into inconsistencies.

    If you want to truly remap Ctrl+C, you’ll want to do so at a lower level (evdev layer) so that it’s not intercepted by other programs, eg using tools like evremap or keyd. But even then, it’s not guaranteed to work everywhere, for instance, if you’re inside a VM or using a different OS, or in a remote session. So it’s best to remap the keys at the keyboard layer itself, which is possible on many popular mechanical keyboards using customisable firmware like QMK/VIA.