ChatGPT uses auxiliary models to perform certain tasks like basic math and programming. Your explanation about plausibility is simply wrong.
If you fine-tune an LLM on math equations, odds are it won’t actually learn how to reliably solve novel problems. Just the same as it won’t become a subject-matter expert on any topic; but it’s a lot harder to write simple math that “looks, but is not, correct” than it is to waffle vaguely about a topic. The idea of an LLM creating a robust model of the semantics of the text it’s trained on is, at face value, plausible; it just doesn’t seem to actually happen in practice.
I’m not a physicist, I don’t know one way or another. But it’s possible that there’s a leading explanation for the formation of the universe based on a mathematical model that predicts exactly one big bang.
Based on the comment you’re replying to, I assume they would say “no, nothing materialized from nothing because there wasn’t a ‘before’ in which nothing could have existed”
It wouldn’t have been published, and he’s only relatively famous if you’re a topologist, but it was Charlie Frohman. Not that it must carry the same weight for you, but I value his insight highly, even if it’s just a quip.
Yes, but it proves that termwise comparison with the harmonic series isn’t sufficient to tell if a series diverges.
The assumption is that the size decreases geometrically, which is reasonable for this kind of self-similarity. You can’t just say “less than harmonic” though; 1/(2n) is termwise smaller than 1/n, and its sum still diverges.
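To make the counterexample concrete in symbols: the terms 1/(2n) are strictly smaller than the harmonic terms, yet the series still diverges, so termwise comparison from below with the harmonic series proves nothing.

```latex
\sum_{n=1}^{\infty}\frac{1}{2n}
= \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{n}
= \infty,
\qquad\text{even though}\quad
\frac{1}{2n} < \frac{1}{n}\ \text{for all } n \ge 1.
```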
Quoting a relatively famous mathematician, linear algebra is one of the few branches of math we’ve really, truly understood. It’s very, very well behaved.
Lived in Iowa for a few years, there were a few authentic Mexican places, just not as many as Americanized ones.
Yes, with Iosevka font
Google it? The axiomatic definition, Dedekind cuts, and Cauchy sequences are the three typical ones, and they’re provably equivalent.
I’m fully aware of the definitions. I didn’t say the definition of irrationals was wrong. I said the definition of the reals is wrong. The statement about quantum mechanics is so vague as to be meaningless.
That is not a definition of the real numbers, quantum physics says no such thing, and even if it did, the conclusion would be wrong.
Stokes’ theorem. Almost the same thing as the high school one. It generalizes the fundamental theorem of calculus to arbitrary smooth manifolds. In the case that M is the interval [a, x] and ω is the 0-form f on M, one has dω = f’(t)dt, and ∂M is the oriented tuple {+x, -a}. Integrating a 0-form over a finite set of oriented points is the same as evaluating it at each point and summing, with negatively-oriented points getting a negative sign. Then Stokes’ theorem as written says that f(x) - f(a) = integral from a to x of f’(t) dt.
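Written out, the general statement and the specialization to the interval [a, x] with ω the 0-form f:

```latex
\int_M d\omega = \int_{\partial M} \omega
\qquad\leadsto\qquad
\int_a^x f'(t)\,dt = f(x) - f(a).
```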
Going to almost certainly be less than 1. Moving further up the food chain results in energy losses. Those fish are going to use energy for their own body and such
For sure, which is why I said “another food source would be needed.” I had in mind something like the wild-caught fish being processed into something useful as part of a more efficient food chain, e.g. combined with efficiently-farmed plant material.
Moreover, there are high mortality rates for the fish themselves inside fish farms.
I don’t have any context on the other pros and cons of fish farming, so definitely not arguing whether they’re a net positive or not.
What’s the ROI? If 15% of wild-caught fish are used to support fish farms that produce twice as much, it’s not as obviously a bad thing. There’d need to be another food source, though.
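A minimal sketch of the arithmetic behind that hypothetical (the 15% and the 2x yield come from the comment above; the unit of 100 and the assumption that yield scales linearly with feed are illustrative, not real fishery data):

```python
# Hypothetical fish-farm ROI sketch.
# Assumption: 15% of the wild catch (by mass) is diverted to feed farms,
# and the farms produce twice that input mass in farmed fish.
wild_catch = 100.0            # arbitrary units of wild-caught fish
feed_share = 0.15             # fraction of wild catch used as farm feed
farm_yield_ratio = 2.0        # farmed output per unit of feed input

feed = wild_catch * feed_share          # 15.0 units diverted to farms
farmed = feed * farm_yield_ratio        # 30.0 units of farmed fish
total = (wild_catch - feed) + farmed    # 85.0 + 30.0 = 115.0 units

print(total)  # more total food than eating the catch directly
```

Under these (made-up) numbers the diversion is a net gain, which is the point of the question; the real answer depends entirely on the actual conversion ratio.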
Only if you’re trying to get a numerical point evaluation. For example, one can use Fourier series to represent complex signals in terms of sine waves, and then reproduce the sine waves with hardware to reproduce the original signal. This is how a simple synthesizer produces different kinds of tones.
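As a small illustrative sketch of that idea (assuming nothing beyond the standard Fourier expansion of a square wave): summing odd sine harmonics with 1/k weights approximates a square wave, which is how additive synthesis builds a brighter tone out of pure sines. The function name is made up for the example.

```python
import math

def square_wave_partial(t, n_harmonics, freq=1.0):
    """Partial Fourier series of a square wave:
    (4/pi) * sum of sin(2*pi*k*freq*t)/k over odd harmonics k."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # odd harmonics 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * freq * t) / k
    return 4.0 / math.pi * total

# With more harmonics the sum approaches +1 on the first half-period.
print(square_wave_partial(0.25, 50))  # close to 1.0
```

Feeding each sine term to an oscillator instead of `math.sin` is exactly the hardware-reproduction step described above.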
We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.
Shoutout to my Fort Myers and Cape Coral homies
That’s not what I meant.
??? You just don’t understand the difference between an LLM and a chat application using many different tools.