• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • Part of this is a debate over what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is no agreed-upon answer to either of those.)

    That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are (a toy sketch of this follows below), or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.

    In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.
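
    A toy sketch of that framing (hypothetical code, not from any real project): a crude “spam filter” that can only learn what its human-supplied labels encode, because those labels are the success criteria.

```python
# Hypothetical toy example: the "model" learns spam detection only
# because a human labeled the examples, i.e. defined what success means.
training_data = [
    ("win a free prize now", 1),    # 1 = spam, per the human labeler
    ("meeting moved to 3pm", 0),    # 0 = not spam
    ("free money click here", 1),
    ("lunch tomorrow?", 0),
]

# "Training": count how often each word shows up under each label.
spam_counts, ham_counts = {}, {}
for text, label in training_data:
    for word in text.split():
        bucket = spam_counts if label == 1 else ham_counts
        bucket[word] = bucket.get(word, 0) + 1

def predict(text):
    """Score a message purely from statistics of the labeled data."""
    score = sum(spam_counts.get(w, 0) - ham_counts.get(w, 0)
                for w in text.split())
    return "spam" if score > 0 else "not spam"

print(predict("free prize inside"))   # "spam": these words were labeled spam
print(predict("quarterly report"))    # "not spam": unseen words score zero
```

    The “intelligence” here is entirely borrowed from whoever wrote the labels; the code cannot discover a category of problem no human has framed for it.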




  • Yes, you’re anthropomorphizing far too much. An LLM can’t understand or recall (in the common sense of the word, i.e. have a memory), and it is not aware.

    Those are all things that intelligent, thinking beings do. LLMs are none of that. An LLM is a giant black box of math that predicts text. It doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is the statistically most likely text to come next, with a little randomization to add “creativity”.
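
    A minimal sketch of what that means in practice (made-up numbers and plain Python, not a real model): the model only supplies scores for what token comes next, and temperature-scaled random sampling supplies the “creativity”.

```python
import math
import random

# Hypothetical next-token scores after the prompt "the sky is".
# A real LLM produces scores like these for tens of thousands of tokens.
next_token_logits = {"blue": 4.0, "cloudy": 2.5, "falling": 1.0}

def sample(logits, temperature=0.8):
    # Softmax over temperature-scaled scores -> a probability distribution.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    # Weighted random choice: the statistically likely token usually wins,
    # but the randomness occasionally picks a less likely one.
    toks, weights = zip(*probs.items())
    return random.choices(toks, weights=weights, k=1)[0]

print("the sky is", sample(next_token_logits))
```

    Nothing in that loop knows what a sky is; it only knows which token tends to follow the others.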