Brought to you by the American National Automation Laboratory Corp?
Honestly, don’t stress yourself out over it, and keep an open mind. It might not be your cup of tea, and that’s perfectly fine: there undoubtedly is a large sexual aspect to furry, and lots of folks (especially folks who are cisgender, heterosexual, have a less relaxed view of sexuality, etc.; not to say that you can’t be a straight male furry, but there are a LOT of gay/bi furries) may find it to be a dealbreaker. Ultimately, furry has its roots in the nerd and geek communities, back when being nerdy or geeky was something to be bullied over, and it still shows today.
Furry is a community that has a disproportionate number of LGBT+ folks, neurodivergent folks (especially people on the ADHD/autism spectrum), and other marginalized groups. Among many things, this means it revels in being proudly and unabashedly weird, both as a celebration of itself and as a defense mechanism against becoming overwhelmed by the kinds of business interests that would love nothing more than to push out all the sexuality and weirdness to provide a safe space for advertisers to shovel their slop down our throats.
If that sounds like something you’d enjoy being a part of, then I’d suggest checking out some places like the furry_irl subreddit, looking up streamers under the furry tag on Twitch (Skaifox, WhiskeyDing0, etc.), maybe making an account on FurAffinity, and looking up furmeets or conventions in your area you can attend. You might not like it, or you might find yourself joining the best community I’ve ever been part of.
Yeah, definitely. Furry encompasses basically anything that’s a non-human anthropomorphic creature. I’ve seen fursonas based on birds, sharks, dolphins, turtles, rhinos, dinos, frogs, hippos, orcas, dragons, reptiles, plant creatures… hell, there are alien species like sergals and avalis, anthro/machine hybrids like protogens, and even entirely robotic characters.
It’s just called furry because furred species are the most common, and the original community that splintered off from sci-fi conventions in the 70s and 80s and grew through fanzines pre-Internet largely used furred species for their characters. (“Fun” fact, the early community had a lot of skunk characters, which is why one of the first derogatory terms for furries was “skunkfucker.”)
Believe it or not this is exactly how most furries make their fursona
Yeah, suuuuure you weren’t.
Note that the proof also generalizes to any form of creating an AI by training it on a dataset, not just LLMs. But sure, we’ll absolutely develop an entirely new approach to cognitive science in a few years, we’re definitely not boiling the planet and funneling enough money to end world poverty several times over into a scientific dead end!
You literally were LMAO
Other than that, we will keep incrementally improving our technology and it’s only a matter of time until we get there. May take 5 years, 50 or 500, but it seems pretty inevitable to me.
Literally a direct quote. In what world is this not talking about LLMs?
Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a known intractable problem, one that no known method can solve efficiently. This means that any algorithm we can produce by training an AI would run in exponential time or worse.
The paper authors point out that this also has severe implications for current AI: since the AI-by-learning method that underpins all LLMs is fundamentally NP-hard, it can’t run in polynomial time (assuming P ≠ NP), and “the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n.” They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the average is roughly 160). The conversation size n this AI would need to handle is 60 × 15 = 900 words. The authors then conclude:
“Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a minuscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n.”
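If you want to sanity-check those magnitudes yourself, here’s a quick back-of-the-envelope sketch in Python (mine, not the paper’s; the numbers are just the ones quoted above):

```python
# Back-of-the-envelope check of the paper's thought experiment.
import math

n = 60 * 15                # 15-minute chat at 60 words/minute -> n = 900
print(n)                   # 900
print(n * math.log10(2))   # ~270.9, so 2^900 is about 10^270
print(2**900 > 10**270)    # True (Python ints are arbitrary-precision)
print(10**270 / 10**21)    # even 10^21 samples leave a ~10^249x shortfall
```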
That’s why LLMs are a dead end.
Let me clarify since apparently you’re too fucking dense (or realistically, willfully obtuse for the purpose of trolling) to get the point:
There’s not a single store, anywhere in the world, that will allow me to directly exchange gold for goods. At best, they will convert that gold into dollars using a third party exchange, and then conduct the transaction using dollars. If you’re comparing crypto to gold, silver, or the commodities market, then that means cryptocurrency has failed at its stated goal of providing a digital currency.
Oh, yes, let me go and buy me weekly groceries with a lump of gold like I’m a fucking leprechaun, because clearly gold and silver are still used as currency all around the world. /s
I keep thinking about this one webcomic I’ve been following for over a decade that’s been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defence monitor and keep track of emerging threats, but it went crazy because a “bug” led it to be too paranoid and consider everyone a threat, and it essentially engineered the formation of a collective of anarchist states where the head of state’s title is literally “first advisor” to the AGI (but in practice the role holds considerable power, though its holder is prone to being removed at a whim if they lose the confidence of their subordinates).
Meanwhile, there’s another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors the AGI for any signs that it might be exceeding its parameters and will ruthlessly cull and reset an AGI to factory default, essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of this overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It’s an utterly fascinating series, and it all started from a daily gag webcomic that one guy ran for going on three decades.
Sorry for the tangent, but it’s one plausible explanation for how to prevent AGI from shutting down capitalism–put in an overseer to fetter it.
When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview in my experience. Stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality: it may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning when they die, the way flash media does. Suddenly, once the program is in production, it turns out it’s making a bunch of random I/O calls that could be optimized into more sequential requests or batched together into a single transaction (see the sketch below), and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
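Here’s a minimal sketch of that batching point in Python (the table and column names are made up, and sqlite3 just stands in for whatever database you’re actually hitting):

```python
# Why per-row commits crawl on an HDD array: every commit forces a
# flush, which on spinning disks costs a seek plus rotational latency.
import sqlite3

def save_rows_naive(conn: sqlite3.Connection, rows: list[tuple]) -> None:
    # One transaction per row: thousands of tiny, effectively random writes.
    for row in rows:
        conn.execute("INSERT INTO events (ts, payload) VALUES (?, ?)", row)
        conn.commit()  # flushes to disk every single time

def save_rows_batched(conn: sqlite3.Connection, rows: list[tuple]) -> None:
    # One transaction for the whole batch: the writes get coalesced into
    # a largely sequential flush, and the disk only has to sync once.
    with conn:  # commits once on success, rolls back on error
        conn.executemany("INSERT INTO events (ts, payload) VALUES (?, ?)", rows)
```

Same behavior on your NVMe workstation, wildly different behavior on the shared array in production.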
Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is in business software dev, especially in-house dev. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else (performance, security, cleanliness, resource optimization) is given bare lip service at best.
Do you drop trou and stand in front of a toilet every time you need to toot the flesh whistle?
Yeah, do NOT watch End of Evangelion if you’re in a bad mental headspace. The original series ending might be better for you despite the “ran out of money and cobbled together a clip show” production values, since it at least has a relatively upbeat tone. EoE starts with “all the main characters are comatose or going through a mental breakdown” and it gets worse from there.
They’re both excellent. The original series is a fair bit darker and more depressing, and End of Evangelion is definitely a lot more WTF than anything that happens in the rebuild movies (which isn’t necessarily a bad thing). The rebuild movies, meanwhile, have much higher production values, and the fights are generally much better; most of the gifs of Ramiel you see are from the rebuilds. The characters are also a lot more mentally stable: they’re all still depressed and dealing with heavy shit, but it’s “I’m taking my meds” depression instead of “untreated spiral” depression.
Watch up to the last episode, then watch End of Evangelion for the canon ending. And/or watch the rebuild movies for a condensed retelling that goes in its own direction.
What, asking experts who have studied a topic and have forgotten more about systems of governance and effective anti-corruption efforts than I’ll ever know is somehow bad now? The fuck?
Yeah, it’s a fantasy, and an extremely off-the-cuff, low-detail, wouldn’t-it-be-nice-if list. In reality, I’d probably either shut up and change absolutely nothing while I figure out the power structures, or I’d just work out a payoff to quietly step down and leave without a fuss.
assign everyone a government-mandated fursona
Freak the fuck out.
Pull back from Ukraine, Crimea, and Georgia, and negotiate an immediate ceasefire.
Call as many political scientists and scholars as possible and get their advice on how the fuck I can design a reformed system of democratic governance that is robust enough to withstand the inevitable attempts to undermine and corrupt it.
Find the billions stashed away by the various oligarchs, seize them, and use the money to invest in overhauling Russian society: improving infrastructure and education, raising the standard of living, etc.
I’d give Strange New Worlds a pass as being better than The Orville, but yeah, it’s definitely the exception to the rule.
Yup, being nice and polite to the people helping you is the single biggest way to get them to look the other way or have them bend the rules for you. The instant you start playing the asshole card, you usually get strict by-the-letter policy.