From the Internet - identity untraceable.
- The author is unidentifiable.
- Clicking the source link later reveals "Sorry, this post was deleted by the person who originally posted it."
- The author of this website believes the post was available before it was deleted.
- Google Gemini reports that:
The quote you are looking for comes from a Reddit discussion in the r/Futurology community. It was posted by a user named Caelinus in late May or early June 2024 (appearing in a thread titled "The Existential Risk of AI: A Real Threat, Not Sensationalism").
The Full Quote
"If you bring up any limitations or flaws in the technology, despite them being objective and obvious, people just claim you don't understand it and that their sci-fi interpretation of it is the real 'understanding it.' It is at the point where people legitimately have started to think that LLMs are actually conscious."
Context of the Remark
The author was participating in a debate regarding the nature of Large Language Models (LLMs) and Artificial Intelligence. The core of their argument was:
- The "Black Box" Fallacy: People often use the complexity of AI as a "blank check" to project human-like qualities (like consciousness or intent) onto it.
- Objective Flaws vs. Subjective Hype: The user was expressing frustration that technical critiques of how these models function—specifically that they are statistical word predictors—are often dismissed by enthusiasts as "not getting the big picture."
- Sci-Fi Influence: The quote highlights a modern phenomenon where public perception of technology is shaped more by tropes from movies and novels than by the actual underlying engineering.
- "I [unknown] get downvoted heavily for 'not understanding how it works.'"
- The most recent example was a bunch of people taking issue with me pointing out that LLMs store data in memory, because they apparently did not store data, just "statistical relationships between how words are connected."
- Which is, of course, data stored in memory. The means by which it does so is novel, but the machine could not reproduce information if that information was not stored. Storing information in a Neural Net [see AI Overview] is still storing it, even if it does not look like traditional data storage.
- If you bring up any limitations or flaws in the technology, despite them being objective and obvious, people just claim you don't understand it and that their sci-fi interpretation of it is the real "understanding it."
- It is at the point where people legitimately have started to think that LLMs are actually conscious, and their evidence for that assertion is that they cannot prove humans are conscious.
- Which is technically (!) untrue, because every person can technically prove that at least one human is conscious, but it is also just an admission that there is no real evidence that the machines are conscious, as they have to fall back on trying to get me to prove a negative.
- [Is 'prove a negative' philosophy gone mad? more] Contact link at foot of page.
- So everyone is getting all terrified of super-intelligent AGI* based entirely on a hype marketing campaign by a bunch of companies that are trying to replace workers with machines, instead of getting terrified of what it means that companies are trying to replace workers with machines. This contribution has been taken down. This page carries similar results. The elehman839 contribution is well worth reading.
- * AGI, or Artificial General Intelligence, refers to a hypothetical intelligence capable of performing any intellectual task that a human can.
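The quoted storage argument can be illustrated with a toy sketch. This is emphatically not how an LLM works internally; a bigram count table stands in for learned weights. The point it demonstrates is the one in the quote: a store of "statistical relationships between how words are connected" is still data in memory, because it suffices to reproduce the original information.

```python
from collections import defaultdict, Counter

# A toy "statistical relationships" store: bigram counts over a training text.
# (Hypothetical example text; the table plays the role of a model's weights.)
text = "the capital of France is Paris".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, steps):
    """Greedily follow the most likely next word, starting from the prompt."""
    words = prompt.split()
    for _ in range(steps):
        candidates = bigrams[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

# The table holds only word-to-word statistics, yet it reproduces the fact:
print(complete("the capital", 4))  # -> "the capital of France is Paris"
```

If the information were genuinely not stored anywhere, no procedure over the table could recover it; that it can be recovered is exactly the quote's point that novel storage is still storage.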