Had a read of Dave Jaques' LinkedIn, and "colourful" would be doing a lot of heavy lifting in summarising their self-description.
Pretty shocking framing there; almost worth a complaint, as it's quite a distance from the truth.
Just to add to that, I'm not much of an economist, but my understanding is there are two main levers that can be used: interest rates (monetary policy) and tax rates (fiscal policy).
It's become orthodox to use the former and ignore the latter, partly because of voter backlash and partly because it can be a bit complicated. But as far as I understand it, given that sovereign governments can print money and borrow when things are bad to generate economic activity, the flipside would be to tax it back out and save it, reducing the supply of money chasing goods.
Some folks argue that would be a tidier way of doing things, who knows?!
https://www.slowboring.com/p/tax-increases-are-the-best-cure-for
https://www.corporateknights.com/category-finance/seven-ways-to-tackle-inflation-without-raising-interest-rates/
If you go read up on the history of central banks using interest rate hikes to generate recessions to tame inflation, it's pretty damn consistent that they often end up overshooting and making things worse than they needed to be to achieve the same result.
There's a follow-up article on RNZ, I think, talking about this a bit more. One of the ideas someone had is that it could be great in health, because they could use AI chatbots to talk to patients in their own language.
Which would be a great service, for sure; but translation tools already exist and are likely to be better than anything branded as "AI" for a long time, and there are always translation services with actual humans we could just pay to do it, without burning the planet.
That's what Excel is: code for people who don't know they're writing code. And it's clearly a bad way of doing most of the things people do with it.
But on the flipside, you have to give it props for getting people a foot into programming, even if they don't realise that's what they're doing (and even if folks who use actual languages and lines of text to achieve the same thing don't accept it for what it kinda is).
I think you could make an argument that Excel is the world’s most used/successful IDE ;)
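To make the "Excel is code" point concrete: a single spreadsheet formula is already a small program. Here's a hedged sketch in Python (the sample data is made up) showing what a cell like `=SUMIF(A1:A5, ">100")` is doing under the hood:

```python
# Hypothetical column of values, as if they were cells A1:A5
values = [42, 150, 99, 230, 7]

# Excel: =SUMIF(A1:A5, ">100")
# Same logic as an explicit filter-and-sum:
total = sum(v for v in values if v > 100)

print(total)  # 380
```

Anyone who can write that formula has already expressed a predicate and a reduction; they've just never had to call it that.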
I'd have to see that in action before I pass judgement, but given LLMs' predilection for hallucination and the vagaries of how humans report tech faults, I'd be surprised if it was significantly more accurate or effective than a human. After all, if it's just working out whether there's a known issue, then it's essentially not much beyond a script at that point; and in that case, do you want to trade the unpredictability of what an LLM might recommend against something (human or otherwise) that will follow the script?
Even if an LLM were an effective level 0 helpdesk, it would still need to overcome the user's cultural expectation (in many places) that they can pick up the phone and speak to somebody about their problem. Having done that job a long, long time ago, diagnosing tech problems for people who don't understand tech can be a fairly complex process. You have to work through their lack of understanding and lack of technical language, and you sometimes have to pick up on cues in their hesitations, frustrated tone of voice, etc.
I'm sure an LLM could synthesize that experience 80% of the time, but depending on the tech you're dealing with, you could be missing some pretty major stuff in the other 20%, especially if the LLM gives bad instructions, or closes the ticket without escalating it, etc. So then you need to pay someone to monitor the LLM and watch what it's doing, at which point you've hired your level 1 tech again anyway.
The more I see and hear, the more I think it's all grift.
I.e., the crypto bros left their coins for NFTs, and now those have tanked, they're finding something else to burn the planet down with in order to scam suckers.
"AI" for health is already known to be very problematic, and nobody wants to see a commission of inquiry 10 years down the track into why some women were not diagnosed correctly from their mammograms.
The chatbots Judith is talking about for tutoring children regularly hallucinate and come up with such stupid things as a cooking recipe for petrol spaghetti, among other reckless trash. Sounds like a fast way to destroy the education of a bunch of children; but all the rich kids will still be in their private schools with low pupil numbers, enjoying private tutors, so why would Judith care?
This sort of thing is pretty consistent with the anecdotal rumours about Luxon's interactions with pilots and other staff while heading up Air NZ. Which sorta pushes those stories from "probably just made up" to "actually, maybe they're legit after all". Some of the scuttlebutt basically alleged he was a total a-hole; which, yeah, I can see it.
I mean, a souped-up compensator Ford Ranger ain't cheap, is it!
Exactly; the statement will have been reviewed by lawyers who would have suggested rewording it.
I just want to jump in here on the whole "tonnes of factual errors" thing…
A lot of the allegations about the accuracy of their data basically came down to arguments about the validity of statistics produced by different testing methodologies: the Labs guy claimed their methods were super good, while other content creators claimed their methods were better.
My opinion is that all of these benchmarking content creators who base their content on rigorous “testing” are full of their own hot air.
None of them are sampling and testing in enough volume to be able to point to any given number and say it is *the* metric for a given model of hardware. So the value reduces to: this particular device performed better or worse than these other devices, at this point in time, doing a comparable test, on our specific hardware, with our specific software installation, using the electricity supply we have, at the ambient temperatures we tested at.
It's marginally useful for a general buying comparison, but in my opinion only to a limited degree, because they just aren't testing in enough volume to get past the lottery of tolerances this gear is released under. Anyone claiming it's *the* performance number to expect is just full of it. Benchmarking presents itself as scientifically objective, but there are way too many variables between any given test run that none of these folks isolate before putting their videos up.
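The sampling point above can be sketched with basic statistics. The FPS figures below are made up for illustration, but they show how wide the uncertainty on the mean is when you only run a benchmark a handful of times:

```python
import statistics

# Hypothetical FPS results from five repeated runs of the same benchmark
runs = [141.2, 138.7, 144.9, 136.1, 143.5]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation

# Rough 95% confidence interval on the mean,
# using the t-value for n=5 (t ≈ 2.78, 4 degrees of freedom)
margin = 2.78 * stdev / (len(runs) ** 0.5)

print(f"{mean:.1f} ± {margin:.1f} FPS")
```

With only five runs the interval spans several FPS either side of the mean, so quoting a single headline number like "141 FPS" overstates the precision considerably; and that's before you account for device-to-device silicon lottery, drivers, and ambient conditions.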
Should LTT have been better at not putting up numbers they could have known were wrong? Sure! Should they have corrected sooner and more clearly once they knew? Absolutely! Does anybody have a perfect testing methodology that produces reliable metrics? Ahhh, I'm not so sure. Was it a really bitchy beat-up at the time from someone with an axe to grind? In my opinion, hell yes.
This! Would you hire a mechanic who does what you say, but 1 in 3 cars they "repair" ends up breaking again 6 months later?
As noted above, I think the statement is that allegations were made and they were not ignored, and/or were addressed.
This is silly.
If it was reported, it was reported. The whole point of Roper Greyell being involved is to identify if there were events that weren’t reported, or were reported and not acted upon.
That's not the only way to read it at all. Your interpretation is that sexual harassment was not ignored and was addressed; but the sentence actually says that allegations were not ignored and were addressed.
Nobody trying to make money on YouTube is going to stop using clickbait; it's a necessary evil to get your videos surfaced by the algorithm. It sucks, but it's here to stay until the algorithm starts punishing it.
I don’t use LinkedIn at all, but that profile confirmed my suspicions about it.