I’m the administrator of kbin.life, a general-purpose, tech-oriented kbin instance.

  • 0 Posts
  • 168 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • All of this is a layman’s view, with only a basic understanding.

    So, on the one hand, in our galaxy alone there are between 100 and 400 billion stars (per Wikipedia). A lot of those have no planets, but plenty have more than our system does, so there are at least as many planets as stars. Among that number there’s a good chance more than one planet is capable of supporting life.

    In fact, as we improve our ability to observe our galaxy, we’re able to verify more and more viable planets, including a reasonable number that are similar to our own.

    This means there’s a reasonable chance that somewhere, life has already evolved to our level or beyond.

    But that certainly doesn’t mean there’s any reason to expect visitors. Even if they could travel at the speed of light, it would still take thousands of years for most of them to reach us (the galaxy is roughly 100,000 light-years across), provided they even chose to come. From where they are, they wouldn’t be able to make out our radio signals, nor likely any other signs of life. So we’d just be one of many “potentially life-bearing” planets.

    So, just my opinion: I think the chance of life being out there is reasonably high, while the chance of actually being visited (assuming it holds true that we cannot travel faster than light) is very, very low.


  • Sort of when it clicked for me was when I realized that your code needs to be a tree of function calls.
    I mean, that’s what all code is anyway, with a main function at the top calling other functions which call other functions. But OOP adds a layer to that, i.e. objects, and encourages doing all function calls between objects. You don’t want to do that in Rust. You kind of have to write simpler code for it to fall into place.

    Yes, this ties in with what I’m saying though. You need a paradigm shift in your design philosophy, which is hard when you come from a Cx background.

    I also think that in OO there shouldn’t be much cross-contamination. It happens (and it happens a lot in my personal projects, to be fair), but when things are well designed it shouldn’t be needed. In C#, for example, a class rather than a function should own a resource. So when using an object between classes, you take it as a reference from a method in one class and pass it into a method of another class, rather than calling that class and making it a dependency of the second class too. That way you have a one-way dependency rather than a two-way one, as in the sketch below.
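
    In Rust terms it’s roughly this shape. A contrived sketch (all names made up, nothing from real code): the orchestrating code owns both objects and moves data between them, so neither type depends on the other.

    ```rust
    struct Producer { counter: u32 }
    struct Consumer { total: u32 }

    impl Producer {
        // Hands the value out; the Producer doesn't know who consumes it.
        fn next_value(&mut self) -> u32 {
            self.counter += 1;
            self.counter
        }
    }

    impl Consumer {
        // Takes the value in; the Consumer doesn't know where it came from.
        fn accept(&mut self, value: u32) {
            self.total += value;
        }
    }

    fn main() {
        let mut producer = Producer { counter: 0 };
        let mut consumer = Consumer { total: 0 };
        // The caller owns both; neither stores a reference to the other,
        // so the dependency stays one-way.
        let value = producer.next_value();
        consumer.accept(value);
        println!("total: {}", consumer.total);
    }
    ```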

    This kind of thinking carries over into creating objects in Rust. Also, within the same class, the idea of a (non-static) function accepting an object that is part of the class and was returned by another function in the same class feels very wrong from a Cx point of view. If we knew we were going to do that, we’d just make it a class-level variable and use it in both functions.

    Like I say, just another way of thinking and I’m not there yet.


  • The bingo one actually uses crossbeam channels instead of mutexes, so that’s nice. I haven’t looked too closely at it though.

    The C# original uses the equivalent of read/write locks. But I found it problematic to make that work the same way in Rust, and then discovered the message-passing option was far easier to implement and actually avoids holding up threads. So I went with that. Much easier, and I think much faster in execution.
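
    For illustration, the rough shape of the message-passing version, assuming the crossbeam crate (the message type and names here are made up, not the real bingo code):

    ```rust
    // One thread draws numbers and sends them down a channel; the
    // receiver drains messages as they arrive, so neither side holds
    // a lock while working.
    use crossbeam::channel;
    use std::thread;

    enum Msg {
        NumberDrawn(u8),
        Done,
    }

    fn main() {
        let (tx, rx) = channel::unbounded::<Msg>();

        let caller = thread::spawn(move || {
            for n in [4u8, 8, 15] {
                tx.send(Msg::NumberDrawn(n)).unwrap();
            }
            tx.send(Msg::Done).unwrap();
        });

        for msg in rx.iter() {
            match msg {
                Msg::NumberDrawn(n) => println!("marking {n}"),
                Msg::Done => break,
            }
        }
        caller.join().unwrap();
    }
    ```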

    I don’t think you can do too much about the Spectrum one if you want to keep the two threads, but here’s what I would change related to thread synchronization. Lemmy doesn’t seem to allow me to attach patch files for whatever reason so have an archive instead… dblsaiko.net/pub/tmp/patches.tar.bz2 (I wrote a few notes in the commit messages)

    In reality I’m never likely to remake the CPU project in Rust. Firstly because I’d need to re-engineer it entirely: it makes extensive use of hierarchical classes, which just don’t work the same way in Rust, and I’m not sure traits would let me do things even close to the same way. But if it were to work with a CPU emulator, they’d need to share the memory, and the CPU needs its own thread.
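
    If I ever did do it, the sharing part might look something like this sketch (assuming Arc<RwLock> over the 6912-byte screen memory; names made up):

    ```rust
    // Screen memory shared between a CPU thread and the drawing thread.
    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        // 6912 bytes: 6144 of bitmap plus 768 of colour attributes.
        let vram = Arc::new(RwLock::new(vec![0u8; 6912]));

        let cpu_vram = Arc::clone(&vram);
        let cpu = thread::spawn(move || {
            // The CPU thread writes into screen memory as it executes.
            cpu_vram.write().unwrap()[0] = 0xFF;
        });
        cpu.join().unwrap();

        // The drawing side reads it back to render the frame.
        let first = vram.read().unwrap()[0];
        println!("first byte: {first:#04x}");
    }
    ```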

    So basically it’s channels indexed by channel number and name? That one is actually one of the easy cases. Store indices instead:

    This was something I was thinking about the other evening. I needed the index to remove some other data anyway, and wondered if I’d be better off having a master vector and usize lookups for that data store. It’s one extra lookup, but an index lookup is about the cheapest there is, and speed isn’t a real issue anyway. It’s replacing Perl scripts pulling data from MySQL. It couldn’t possibly run slower than that :P

    Thanks for the commentary though, and I think I’m going to make the changes to use indices to look up data. I wanted to re-order the way things are done a bit anyway. The potential problem I see is that the lookups would need to be regenerated every time I delete something. But since everything is rebuilt from a file on load, maybe I can just remove the items from the lookups and leave them in the vector; they’d be gone on the next run anyway.
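
    Something like this sketch is what I have in mind (field names are placeholders, not the real code):

    ```rust
    // One master Vec owns the channels; the ordered lookups store plain
    // usize indices into it instead of shared references.
    use std::collections::BTreeMap;

    struct Channel { number: u32, name: String }

    struct Channels {
        master: Vec<Channel>,
        by_number: BTreeMap<u32, usize>,
        by_name: BTreeMap<String, usize>,
    }

    impl Channels {
        fn insert(&mut self, ch: Channel) {
            let idx = self.master.len();
            self.by_number.insert(ch.number, idx);
            self.by_name.insert(ch.name.clone(), idx);
            self.master.push(ch);
        }

        // "Deleting" only drops the lookup entries; the Vec entry stays
        // put, so no indices need regenerating. The next run rebuilds
        // everything from the file anyway.
        fn remove_by_number(&mut self, number: u32) {
            if let Some(idx) = self.by_number.remove(&number) {
                self.by_name.remove(&self.master[idx].name);
            }
        }
    }

    fn main() {
        let mut store = Channels {
            master: Vec::new(),
            by_number: BTreeMap::new(),
            by_name: BTreeMap::new(),
        };
        store.insert(Channel { number: 1, name: "BBC One".into() });
        store.remove_by_number(1);
        assert!(store.by_name.is_empty()); // lookups cleared, Vec untouched
    }
    ```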


  • I remember those times too. The difference today is that there are so many more libraries, and projects use those libraries a lot more often.

    So using configure and make means the user is also responsible for keeping all those libraries up to date. And again, if we’re talking about avoiding binary installs, each of those needs its own regular configure/make cycle too. It’s not unusual for large packages to depend on 100+ libraries, at which point building and maintaining the builds for all of them yourself becomes untenable. Gentoo exists to automate a lot of this while still building from source, though.

    I understand why binaries with references to other binary packages as prerequisites are used. I also understand where the limits of that approach are and why AppImage/Flatpak/Snap exist. I just don’t particularly like the latter as a concept, but I accept there are times you might need them.


  • The current thing I’m working on (a processor for IPTV m3u files) isn’t public yet; it’s still in the very early stages. Some of the “learning to fly” Rust projects I’ve done so far are here, though:

    https://git.nerfed.net/r00ty/bingo_rust (a multi-threaded bingo game simulator, which I made because of the Stand-up Maths video on the subject).
    https://git.nerfed.net/r00ty/spectrum_screen (a port of part of a general CPU emulation project I did in C#; it emulates the ZX Spectrum screen. You can load in the 6912-byte screen dumps and it will show them in a 2x-scaled window).

    I think both of these rather overuse Arc<RwLock<Thing>>, because they both operate in a threaded environment. Bingo is wholly multi-threaded, and the spectrum screen is meant to be used by a CPU emulator running in another thread. So not quite the same thing, but you can probably see a lot of jamming the wrong shape into the wrong hole in both of them.

    The current project isn’t multi-threaded. So it has a lot of the Rc/Rc<RefCell> action instead.

    EDIT: Just to give the reason for Rc<RefCell> in the current project. I’m reading in an m3u file and referencing it against an Excel file. So in the structure for the m3u file I have two BTreeMaps, one keyed by channel number and one by name, each containing references to the same Channel objects.

    Likewise, the same Channel objects are stored in the structure for the Excel file that is read in (and searched for in the m3u file structure).

    BTreeMaps are used because in different scenarios the contents will be output in either name order or channel-number order, so it’s just better to put them in, in that order, in the first place.
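
    Simplified, the layout is something like this (real field names differ):

    ```rust
    // Two ordered maps share ownership of the same Channel objects.
    use std::cell::RefCell;
    use std::collections::BTreeMap;
    use std::rc::Rc;

    struct Channel { number: u32, name: String }

    type Shared = Rc<RefCell<Channel>>;

    struct M3uFile {
        by_number: BTreeMap<u32, Shared>,  // iterate in channel order
        by_name: BTreeMap<String, Shared>, // iterate in name order
    }

    fn main() {
        let ch: Shared = Rc::new(RefCell::new(Channel {
            number: 1,
            name: "BBC One".into(),
        }));
        let mut m3u = M3uFile {
            by_number: BTreeMap::new(),
            by_name: BTreeMap::new(),
        };
        m3u.by_number.insert(ch.borrow().number, Rc::clone(&ch));
        m3u.by_name.insert(ch.borrow().name.clone(), Rc::clone(&ch));
        // Both maps hand back the very same underlying Channel.
        assert!(Rc::ptr_eq(&m3u.by_number[&1], &m3u.by_name["BBC One"]));
    }
    ```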


  • The problem with Rust, I always find, is that I’m from the previous coding generation: I grew up on 8-bit machines with BASIC and assembly language, then moved into OO languages. With Rust, I’m always trying to shove a round block into a square hole.

    When I look at projects written in Rust from the start, I can see they’re using a different design paradigm.

    Not to say what I make doesn’t work, or isn’t still fast and mostly efficient (mostly…). But as one example: because I’m used to working with references and shoving them into different storage, everything ends up surrounded by Rc<xxx> or Rc<RefCell<xxx>> and accessed with blah.as_ptr().borrow().x etc.

    Nothing wrong with that, but the code (to me at least) feels messy compared to, say, C#, which is where I do most of my day-job work these days. Since things are done very differently in the Rust projects I see online, I feel that to really get on with the language I need a design-paradigm shift somewhere.

    I do still persist with Rust because I think it’s more portable than other languages. By that I mean it produces executables for Linux and Windows from the same code, needing only the standard libraries installed on the machine. So when I think of writing a project I want to work across platforms, I generally look at Rust first these days.

    I just realised this is programmerhumor. Sorry, not a very funny comment. Unless you’re a Rust developer, laughing at my plight of trying to make Rust work for me.


  • Specifically answering this question: it works transparently alongside IPv4. Organisations running servers can run both IPv4 and IPv6 with very little effort on their part. ISPs can deploy it, and router makers can include support, with only a reasonable amount of effort.

    As users AND servers get IPv6 addresses, they will just be used in the background. At some point there would be so much IPv6 adoption that IPv4 could be turned off. There is a transition mechanism called “6to4”, but dual stack has (I think rightly) become the main way people run both.
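
    From the application side, dual stack is mostly transparent too. For example, in Rust a single wildcard IPv6 listener can usually accept IPv4 clients as well (OS-dependent behaviour; just a sketch):

    ```rust
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // On most systems a wildcard IPv6 socket with IPV6_V6ONLY off
        // also accepts IPv4 connections; the default varies by OS.
        let listener = TcpListener::bind("[::]:8080")?;
        for stream in listener.incoming() {
            let stream = stream?;
            // An IPv4 peer shows up as a v4-mapped address,
            // e.g. ::ffff:192.0.2.1
            println!("connection from {}", stream.peer_addr()?);
        }
        Ok(())
    }
    ```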

    In the UK I think at least half the ISPs provide IPv6 now, and I believe Europe generally is the same or better. But we’re still far from replacing IPv4, and I wonder when, if ever, that might happen.


  • I’m going to just answer each point in turn. Maybe it’s useful. I don’t know.

    It offers a shitload of IP addresses

    It does. Generally ISPs assign each user at least a /64 prefix, i.e. 2^64 addresses: the entire IPv4 address space (2^32) multiplied by itself. There’s a lot of address space to go around.

    They look really complicated

    This is true, but you rarely need to remember a full IP address; most resources are accessed via DNS. If you have servers on your own network, you’ll probably need to remember your own prefix (the first 3 or 4 blocks of 4 hex digits), and the servers you want to access would likely be ::1, ::2, etc. within that allocation, so you’d learn them. For example, with a prefix of 2001:db8:1234:5678::/64, your first server might just be 2001:db8:1234:5678::1. Also, most routers allow local DNS entries, and there are other things that help here.

    Something about every device in your local network being visible from everywhere?

    This is a concern, but mostly because router makers often configure their routers badly. The correct way to configure a router is to allow outgoing/established connections by default and block all incoming connections (until you specifically open a port). Once that’s done, the security is very similar to NAT.

    Some claim it obsoletes NAT?

    Yes. NAT was created to make a small address space work in an era of multiple internet consumers behind a single connection. When each device can get a routable IPv6 address, NAT is not needed. However, the stateful firewalling I describe above IS essential on consumer routers.

    Now I’ll elaborate on some features of IPv6, a lot of which just aren’t being used when they could be.

    IPv6 privacy extensions (RFC 4941)

    This allows normal client machines (the kind that would usually sit entirely behind NAT) to have a similar level of security and privacy to what NAT provides. One concern with plain IPv6 and a fixed allocation is that a specific machine could be identified from web logs etc., which is bad in privacy terms. This extension ensures you have multiple active IPv6 addresses: one stable address, perhaps the one with some ports open, which is not used for outgoing connections; and a random temporary address used for outgoing connections, which has no ports open and changes frequently. I think this is enabled by default on Windows; when you look in ipconfig you will often see multiple “temporary addresses”.

    Harder to portscan

    Currently it doesn’t take THAT long to portscan the whole IPv4 address space; it’s only 2^32, about 4.3 billion addresses. And because almost every public address has multiple hosts behind it, there’s a good chance ports will be open on a lot of the IPs scanned.

    With IPv6 the public address space is huge: a single /64 is 2^64 addresses, four billion times the entire IPv4 internet. With machines allocated randomly within a large per-user allocation, and every port on every IP still needing a scan, active port scanning becomes much harder. The privacy extensions above also make passive scanning (port scanning IPs harvested from web logs, for example) harder too.

    User experience

    Provided consumer routers are configured well from the factory, and ISPs make sensible decisions about allocating address space, users get the advantages without even knowing they’re using IPv6 in many cases. When you go to Google/Facebook/YouTube etc., you may well be on IPv6 and not even know it.


  • The way I read it, the developer wanted opt-out, but it’s likely it will be opt-in. I’m fine with opt-in and vehemently against opt-out for telemetry.

    I would prefer the information was statistical only. Rather than hostname (making the assumption they only want hostname to be able to somehow separate the data to follow changes over time), a much better idea would be some kind of hash based on information unlikely to change, but enough information that it would be unlikely possible to brute-force the original data out of the hash. So all they know is, this data came from the same machine, but cannot ID the machine. Maybe some kind of unique but otherwise untrackable unique ID is created at install time and ONLY used for this purpose and no other.