• Eranziel@lemmy.world · 1 day ago

    Part of this is a debate over what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” rather than “answer” because there is no agreed-upon answer to either.)

    That said, one of the main purposes of AGI would be to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.
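
    To make the “human-framed success criteria” point concrete, here is a toy sketch (my own illustration in plain NumPy, not taken from any specific tool): the learner never decides what counts as success; the labels y, supplied by a human, are the entire definition of it.

    ```python
    import numpy as np

    # Hypothetical toy example: logistic regression on made-up data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                             # inputs
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)    # labels -- the human-defined "success criterion"

    w = np.zeros(3)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # model's predicted probabilities
        w -= 0.1 * X.T @ (p - y) / len(y)    # gradient step on a loss a human chose (cross-entropy)

    acc = ((p > 0.5) == (y > 0.5)).mean()
    print(f"accuracy on the human-framed task: {acc:.2f}")
    ```

    The model only ever gets better at matching those labels; it has no way to decide that the task itself should be something else.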

    In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.

    • qt0x40490FDB@lemmy.ml · 1 day ago

      I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

      When you say “novel answers,” what exactly do you mean? The questions on the IMO had never been posed to any human before that year’s Math Olympiad, and almost all humans cannot answer those questions.

      Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.

      • gandalf_der_12te@discuss.tchncs.de · 1 day ago

        > What is a question whose answer you would count as novel, and which you yourself could answer?

        AI has no genetics, and therefore no instincts shaped by billions of years of evolution.

        So when it is presented with a challenge that doesn’t appear in its training data, such as whether or not to love your neighbor, it may be unable to answer, because that exact scenario was never part of what it trained on.

        Humans can answer it instinctively, because billions of years of evolutionary experience back us up and give us a solid basis for long-term decision-making.