this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who’s actively being a drain on the site
here’s his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline “wordy racist fest”:
A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.
He gets us! He really gets us!
from the (extensive) footnotes:
Occupy Wallstreet strikes me as another instance of the same kind of popular sneer culture. Occupy Wallstreet had no coherent asks, no worldview that was driving their actions.
it’s so easy to LessWrong: just imagine that your ideological opponents have no worldview and aren’t trying to build anything, sprinkle in some bullshit pseudo-statistics, and you’re there!
Lesswrong and SSC: capable of extreme steelmanning of… checks notes… occult mysticism (including divinatory magic), Zen Buddhism-based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind’s extinction…
Not capable of even a cursory glance into their statements, much less steelmanning: sneerclub, Occupy Wallstreet
Those examples are the Ingroup. We are the Outgroup.
It is gonna be worse: they can back up their statements by referring to people who were actually there, but the person they’d then be referring to is Tim Pool, and you can’t, as a first-principles intellectual of the order of LessWrong, reveal that actually you get your information from disgraced yt’ers like all the other rightwing plebs. It has to remain an unspoken secret.
A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain in-expectation X dollars by committing some crime (which has negative externalities of Y>X dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (Z) be greater than X/p, i.e. X<p∗Z.
Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
In this case, a core component of the pattern of plausible-deniable aggression that I think is present in much of Said’s writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways), the probability of catching someone in ambiguous aggression is much lower.
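(For anyone who wants to check the arithmetic he’s invoking: it’s just the standard expected-value deterrence sum. A minimal sketch with made-up numbers, mine and not his:)

```python
# Toy illustration of the deterrence arithmetic quoted above (my own toy
# numbers, nothing from the post): the "criminal" expects to gain X, gets
# caught with probability p, and is punished Z if caught. Deterrence needs
# the expected punishment p * Z to exceed the gain X, i.e. Z > X / p.

def minimum_punishment(gain_x: float, catch_probability_p: float) -> float:
    """Smallest Z that makes the expected payoff X - p*Z non-positive."""
    return gain_x / catch_probability_p

# e.g. a sneaky comment "worth" 10 smugness-points with only a 5% chance
# of being caught would, on this logic, need a punishment of 200.
print(minimum_punishment(gain_x=10, catch_probability_p=0.05))  # 200.0
```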
Fucking hell, that is one of the stupidest, most dangerous things I’ve ever heard. Guy solves crime by making the harshness of punishment proportional to the difficulty of passing judgement. What could go wrong?
Hmm, yes, I must develop a numerical function to determine whether or not somebody doesn’t like me…
One thing he gets is that direct aggression is definitely more effective in this situation. I can, and do, tell these people to fuck straight off, and my life is better for it!
“So, what are you in for?” “Making a right turn on a bicycle without signalling continuously for the last 100 feet before the turn in violation of California Vehicle Code 22108”
“… And litterin’.”
“…And creatin’ a nuisance”
@Amoeba_Girl @sneerclub isn’t this exactly the same “logic” that escalated the zizians to multiple murders?
Never raise an eyebrow without dropping the banhammer
that habryka dude sure loves the sound of his own voice.
tbf being able to write thousand-word blog posts and use phrases like “good and important” is part of his job description
btw I read Said’s responses to his banning and if that dude ever shows up here he’s gone the second he’s spotted
They gave him a thread in which to complain about being banned… Are these people polyamorous just because they don’t know how to break up?
That it took this long to ban this guy and this many words is so delicious. What a failure of a community. What a failure in moderation.
Based on the words and analogies in that post: participating in LW must be like being in a circlejerk where everyone sucks at circlejerking. Guys like Said run around the circle yelling at them about how their technique sucks and that they should feel bad. Then they chase him out and continue to be bad at mutual jorkin.
E: That they don’t see the humor in sneering at “celebrating blogging” and that it’s supposedly us at our worst is very funny.
you can tell the real problem was I called them racist
You called them racist without proving from first principles that it is bad to be racist, that they are racist, and that their specific form of racism is also bad and will not lead to better outcomes than being non-racist in the megafuture.
Hey if a tree is racist in the woods and two nerd blogs that pretend to be diametrically opposed on the political spectrum but are actually just both fascist don’t spend millions of words discussing it, is it really racist or should we assume more good faith
You live rent-free in so many big ol noggins.
All that acreage has to be adding up. Have you ever considered going into real estate?
in greggs?
Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.
AND OLIVER COMES IN FROM THE TOP ROPE WITH THE HOTDOG COSTUME
Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.
hey everyone i am going to become top mod on this forum, now let me just reinvent human interaction from first principles
From the comments:
If Said returns, I’d like him to have something like a “you can only post things which Claude with this specific prompt says it expects to not cause <issues>” rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.
And:
Couldn’t prediction markets solve this?
Ain’t enough lockers in the world, dammit
Lol I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “ai risk” researcher, insofar as there is any AI risk, would only increase it.
Boy did I end up more right on that than my most extreme imagination. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.
The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.
some UN-associated ACM talk I was listening to recently had someone cite a number at (iirc) ~~$1.5tn total estimated investment~~ $800b[0]. haven’t gotten to fact-check it, but there are a number of parts of that talk I wish to write up and make more known.
one of the people in it made some entirely AGI-pilled comments, and it’s quite concerning
this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)
the question I asked was:
To Csaba (the current speaker): it seems that a lot of the current work you’re engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?
response is about here
[0] edited for correctness; forget where I saw the >$1.5t number
hearing him respond like that in real time and carefully avoiding the point makes clear the attraction of ChatGPT
Yeah, a new form of apologism that I started seeing online is “this isn’t a bubble! Nobody expects an AGI, it’s just Sam Altman, it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each”.
Which is next level idiotic, besides the numbers just not adding up. There’s only so much open source to plagiarize. It is a very niche activity! It’ll plateau and then a few months later tiny single GPU models catch up to this river boiling shit.
The answer to that has always been the singularity bullshit where the biggest models just keep staying ahead by such a large factor nobody uses the small ones.
Which is next level idiotic, besides the numbers just not adding up. There’s only so much open source to plagiarize.
but they can also plagiarize all the code that gets sent to them from software dev companies where employees use AI coding tools
We should be so lucky, the ensuing barrage of lawsuits about illegally cribbing company IP would probably make the book author class action damages pale in comparison.
but how would they figure out that it’s happening?
The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.
Given they’re going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO2, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they’re arguably en route to proving Yud right in the dumbest way possible.
Not by creating a genuine AGI that turns malevolent and kills everyone, but in destroying the foundations of civilization and making the world damn-nigh uninhabitable.
Consider, however, the importance of building the omnicidal AI God before the Chinese.
lobste.rs just banned an 11-year-old account with almost 5,000 comments and 45k karma for being a transphobic jerk. No muss, no fuss, no apologetic blogpost where the user could defend themselves and rile up the masses.
That’s how you do it, people.
edit I have now skimmed the comments where banned user Said can explain himself, and he’s using his last efforts to nobly defend himself, thanking his admirers, and generally projecting an image of a man wrongly accused.
j/k he’s doubling down on being a dick.
oh, i’m laughing now. it’s actually beautiful that it was the anubis’ anime jackal girl that forced him to drop the plausible deniability shield and go full queerphobic.
I finally found which user we’re talking about and I am quietly delighted at that smarmy fucker being directed to the fourth-floor egress.
edit: here’s the mod’s long-form last warning. You’ll see in that thread my next prediction for ejection via the fourth floor, whose profile shows he’s into crypto.
yeah sorry I had the username (“friendlysock”) in a first draft then forgot to add it
good riddance to bad rubbish
…why does that username ring bells in my brain
did it show up here somewhat recently? (I ask right before checking search)
(e: nothing immediately in search but I could swear I’ve seen that name somewhere in the last few months (and not in a good context))
smarmy not-as-cryptic-as-he-thought right-winger on lobsters who didn’t quite hide his power level
shit you’re right, I should search offsite (active) chats too
(like, largely it just bugs me where I know the name from (because being baseline horrendous at recalling names and then recognising this one is uhhhh))
they finally got that asshole? took ’em long enough
An update: about a week later, a Concerned Citizen furrows their brow and Has Questions on how this was handled
https://lobste.rs/s/zoirhl/appealing_ban_user_friendlysock
A predictable shitshow erupts, which leads to the creator of Anubis quitting the site, along with a few others.
At least I get more entries in my “lobste.rs assholes” list.
Worth noting is this text from the mod (https://lobste.rs/s/zoirhl/appealing_ban_user_friendlysock#c_6prapm)
This pattern was that every couple years there’d be a long string of cruel, dismissive, or discouraging comments. Not necessarily wrong, not outright abusive, not spam, not any one over the line, but all of them… just endless “ugh, this guy again?” and other users, more or less quietly, leaving rather than deal with it. This final comment called out in the modlog is one more example. The entire goal of the rhetoric is that it’s not explicit, it’s not outright libelous, it doesn’t even take personal responsibility for holding the opinion of the smear it insinuates. It’s deniable out of context, whether the missing context is the author’s writing history or the missing context is the old bigotry that trans people are pedophiles.
I’m not missing that context. The only thing I’m missing is the charity to think that somehow this time it’s different, that after a decade, this time would be the last time I have to explain that this is a community where everybody’s going to be treated with some basic dignity. The other mods suggested the idea of a public apology but that didn’t work out, and absent a persuasive reason to expect this pattern wouldn’t continue, this ban stands.
j/k he’s doubling down on being a dick.
I had kind of gotten my hopes up from the comparisons of him to sneerclub that maybe he’d be funny or incisively cutting or something, but it looks mostly like typical lesswrong pedantry, just less awkwardly straining to be charitable (to the in-group).
Funniest are all the commenters loudly complaining about this decision and threatening/promising to delete their accounts.
Nice, I petition for this to be the new description of SneerClub just like that magnificent Yud quote was on Reddit
“They don’t need to develop protocols of communication that facilitate buying castles, fluffing our corporate overlords, or recruiting math pets. They share vegan recipes without even trying to build a murder cult.”
Here’s my recipe for blueberry bread that I make for parties and such.
And here’s my experimental recipe for yu hsiang eggplant (still in progress; this was my most recent attempt).
mary chung’s mentioned! i miss mary chung’s
blueberry with 3 Rs bread
Come to the Sneer Attractor, we have brownies
Here’s a vegan gumbo I made for Thanksgiving a couple years back.
I’ve never tried a Pyrex roux before. I’ll have to give that a shot. Often, I use our Pyrexen to rehydrate textured vegetable protein. Scoop a couple cups from the giant box in the pantry, add a couple teaspoons of stock concentrate (e.g., the Better Than Bouillon veggie and roasted garlic flavors), add water until the granules start floating, stir, microwave 30 seconds, stir, microwave another 30 seconds. Then it’s ready for skillet-frying with whatever spices and other flavorings seem appropriate in the moment. Chili powder, red pepper flakes, cumin, oregano and a dash of cocoa powder makes for a good Tex-Mex flavor profile that can sub for ground beef in tacos, enchiladas, etc. Soy sauce, mirin and sugar or agave is a straightforward teriyaki. It’s pretty versatile stuff.
The Totole “Granulated Chicken Flavor Soup Base Mix” is another good flavor boost.
lol
I, the man from the internet who called Peter Thiel a racist hotdog, am the one with real power.
You might need to update that to “racist wax hotdog” judging from his appearance lately.
It is very important we do not congratulate you over this, or we will become linkedin!