It's nice to see someone else who recognizes that the Emperor is naked. I find it somewhat disturbing how many people seem to fall for the Eliza-effect illusion that LLMs are thinking.
> One computer scientist speculated that his LLM had attained sentience.
> How did he reach that conclusion? Basically, he asked “Are you conscious?”, the machine responded “Yes”, and that was that.
Oh, come on now. This is referring to Blake Lemoine, and while I doubt his conclusions, he wasn't being as simplistic as all that. He's not completely stupid.
Horrible design on the website. Please just give me a block of text to read.
What an obnoxious website. I'm not clicking through your silly JavaScript-animated slideshow one sentence at a time just to read an article.
That's surely a professional, non-sensationalist, title that's appropriate for university professors.
Was this written in 2022?
Looks like early 2025. Anything in it in particular that you see as comically out of date?
This website is not worth your time.
> LLMs operate in the plane of words, not in the world of physical phenomena that science investigates. They don’t reason, synthesize evidence, or draw upon the previous literature. They can generate text that looks like a paper but mistaking this for science is a cargo-cult fallacy.
This is clearly wrong
From the authors of "Calling Bullshit: The Art of Skepticism in a Data-Driven World"[0], Carl Bergstrom and Jevin West.
[0] https://callingbullshit.org/
A 10,000 word website about bullshit machines.
See what they did there?
Clearly just an anti AI rant packaged up in a fancy suit.