𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍

       🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆. 
 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍 
  • 0 Posts
  • 113 Comments
Joined 2 years ago
Cake day: August 26th, 2022

  • I thought the point of LTS kernels is they still get patches despite being old.

    Well, yeah, you’re right. My shameful admission is that I’m not using LTS because I wanted to play with bcachefs and it’s not in LTS. Maybe there’s a package for LTS now that’d let me at it, but, still. It’s a bad excuse, but there you go.

    I think a lot of people also don’t realize that most of the performance issues have been worked around, and if RedoxOS is paying attention to advances in the microkernel field and isn’t trying to solve every problem in isolation, it could end up with close to monolithic-kernel performance. Certainly close to Windows performance, and that seems to be good enough for industry.

    I don’t think microkernels will ever compete in the HPC field, but I highly doubt anyone complaining about the performance penalty of microkernel architecture would actually notice a difference.


  • Yeah, that’s just shit behavior. You often see this from sophomores - people who were themselves newbs a short while ago and now think they’re experts. It’s just people, man.

    I don’t know why anyone couldn’t remove whatever they wanted, as long as they looked carefully at the list of other things that would be removed along with it and didn’t see anything they recognize and want to keep. I know of no distribution that will let you remove a dependency without telling you what it’s needed for. There are several where you can tell the package manager to remove something along with everything that depends on it, and it won’t ask you to confirm a thing (rough sketch below). But, then, all Linuxes will let you sudo rm -rf /, too.

    Nobody should have to get any answer to this question other than: “remove whatever you want; just pay attention to what the package manager is telling you.”
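
    For what it’s worth, here’s a rough sketch of both behaviours - pacman and apt are just the two examples I know off the top of my head, some-package is a placeholder, and the exact output will vary:

        # Arch: remove a package plus its now-unneeded dependencies;
        # pacman refuses outright if something else still depends on it
        sudo pacman -Rs some-package

        # Debian/Ubuntu: apt lists everything that will be removed and
        # waits for a y/N confirmation before touching anything
        sudo apt remove some-package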



  • This particular issue could be solved in most cases in a monolithic kernel. That it isn’t is by design. But it’s a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists the mount is still in use by the zombie process - a process the kernel provides no mechanism for terminating.

    You can demonstrate this by experiment on Linux using FUSE filesystems. Create a program that is guaranteed to become a zombie (a throwaway shell version is sketched at the end of this comment). Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have an NFS mount point you cannot unmount. Now mount something using FUSE, say a remote WebDAV share. Run the same zombie process there. Again, the mount point is unmountable. Now kill the FUSE process itself. The mount point will be unmounted and disappear.

    This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any process not using the module. And in a well-designed microkernel, even processes using the module can in many cases carry on as if it had never been restarted.

    FUSE is really close to the capabilities of a microkernel, except it only covers filesystems. In a microkernel, nearly everything is like FUSE. A Linux kernel compiled such that everything is a loadable module, with nothing hard-linked into the kernel, is close to a microkernel - except without the benefits of actually being one.

    Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.
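
    If anyone wants to reproduce the experiment, here is a minimal sketch of a way to leave a zombie lying around for a few minutes, in plain shell (the durations are arbitrary; run it with the mount point in question as your working directory):

        # The subshell backgrounds a short-lived child, then exec replaces the
        # subshell with a long sleep that never calls wait(). When the child
        # exits it has no reaper, so it shows up in ps as <defunct> and stays
        # that way until the long sleep finally ends.
        ( sleep 2 & exec sleep 300 ) &

        # a few seconds later, look for the zombie
        ps -ef | grep defunct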



  • ORLY.

    Do explain how you can have microkernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I’m really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that’d really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn’t getting security patches).

    I’d love to hear how monolithic kernels have solved these.
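
    To be concrete about what I’m asking for: the nearest thing stock Linux offers is unloading and reloading a module that nothing is currently using, and that refuses to work in exactly the cases that matter - rough sketch, with nfs purely as an example module name:

        # only works while the module is idle; if a filesystem it backs is
        # still mounted, modprobe simply refuses and reports the module is in use
        sudo modprobe -r nfs
        sudo modprobe nfs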


  • I have no experience with this, but frankly, in a social situation like this - or, really, any other - honesty is a good bet. “Start as you mean to go on.” You can do this quite nicely, and if they’re so insensitive that they take it personally and get offended, maybe that’s a useful red flag?

    Say, “it’s been great talking to you; I have things I need to do now, but I really look forward to talking to you tomorrow!” You can tailor it to how enthusiastic you are and where you are in the relationship - more or less flirty, more or less suggestive. The main thing is to simply be honest - you need to focus on other things, get some sleep, clean the dishes, eat, walk and feed the dog… you have a life that needs taking care of away from your phone. And if you really are eager to continue talking tomorrow, saying so can really boost someone’s confidence.

    It can be a fast wind down. Especially if you’re at a place in the relationship where you can make a date. “I’ll be done with work and able to focus on you after 5 - TTYT?” Or if you’re unsure about how far you want to take it, leave it open; tell them to text you when they find time.

    Honesty is almost always the right answer.




  • I’m an American. I do understand the cost of re-entering the EU; given how clearly abysmal the decision was, why is no party talking about a re-join process? Is it because many of Labour’s base were Leavers? Is it something that might come up if they have a couple of successful terms? Is it political cyanide?

    Why, when Brexit is clearly unpopular, has done direct and observable damage to the British economy, and was a shock to everyone when it passed (not least the protest voters - something we’re struggling with over here ourselves) - why is no one bringing up a rejoin effort?

    ELIaA (explain like I’m an American)




  • Also fake because zombie processes.

    I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. Ended up rebooting, which was a sort of baby-with-the-bathwater solution.

    Zombie processes still infuriate me. While I’m not a Rust developer, nor do I particularly care about the language, I’m eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of making it to useful desktop status. A good microkernel would address so many of the worst aspects of Linux.


  • That article is an excellent resource, BTW, thank you. However, nowhere does it say anything about swap being useful when you have more memory than you ever use.

    1TB of memory is not a lot for many applications, so just saying “this guy has 1TB of memory and look what he thinks of swap” doesn’t mean much. If he’s running LLMs or really any non-trivial DB (read: any business DB), then that memory is being used.

    Having enough memory that you never have to swap is always better than needing to swap, and nothing in Chris’ article argues otherwise. What he mainly argues is that swap is better than OOM killers, better than configurations that lead to memory contention in the first place, and better than the alternative strategies people reach for when they turn swap off.

    The fact is, I could turn on swap, but it would never get used because I’m not doing anything that requires heavy memory use. Even running KDE and several Java and Electron apps, I wouldn’t run out of physical memory. I’ll run into CPU constraints long before I run into memory contention issues.

    Frankly, if my system allowed me to have, say, 40GB instead of 64, I’d have done that. All I want is to never have to use swap - because never needing swap is always preferable to needing it - and slightly more than 32GB is where I happen to land. But I can only have symmetric memory modules, memory comes in power-of-two sizes, and 64GB is affordable.

    Again, Chris’ essay says only that swap is better than many of the alternatives people seek; not that swap is better than simply never exhausting physical RAM.

    As a final point, there is another kind of “swapping” that happens between tiers of physical memory - between L1 and L2 cache, and between cache and main memory. That’s not what Chris is talking about, nor what the swappiness tuning in the OP’s article addresses. Both of those concern swapping between memory and persistent storage.
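
    For anyone following along, the swappiness knob that article is tuning is just a sysctl; roughly (the value below is only an example):

        # how strongly the kernel prefers swapping anonymous pages over
        # dropping file cache; the default is usually 60
        cat /proc/sys/vm/swappiness

        # lower it for the current boot only
        sudo sysctl vm.swappiness=10

        # and check whether any swap is being used at all
        free -h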



  • Don’t get me wrong; I love this. This is fantastic. However, I have only one thing to say: mhwahahahahahhaa!

    The last time I upgraded my desktop computer, I said “F it” and maxed out the RAM and put 64GB in it. It’s an AMD with integrated GPU that immediately takes over 2GB RAM – and I still have yet to do anything that has caused it to drop below 50% free memory. It’s exhilarating.

    TBF, I spent years on a more memory-constrained laptop and my workflow became centered around minimalism: tiling WM, no DE, mostly terminal clients for everything but the web. When I got the new computer, with wild abandon I tried all the gluttons: KDE, Gnome… you know, all of them. The eye candy just wasn’t worth the PITA of the mousie-ness of them, and I eventually went back to Herbstluftwm and my shells. Now, when I do run greedy apps - usually some Electron crap - what bugs me is the constant CPU suck even at idle, so I find a shell alternative.

    I guess it’s ironic that I live in a land of memory plenty and never need more than half of what I have available. But I still get a little thrill when I do notice my memory use and I’ve got 70% free. Makes me want to code up a little program with an intentional memory leak, just for fun, y’know?