

That graph is hilarious. Enormous error bars, totally arbitrary quantization of complexity, and its title? "Task time for a human that an AI model completes with a 50 percent success rate". A 50 percent success rate is useless, lmao.
On a more sober note, I’m very disappointed that IEEE is publishing this kind of trash.
Part of this is a debate over the definition of intelligence and/or consciousness, which I am not qualified to discuss. (I say "discuss" rather than "answer" because there is no agreed-upon answer to either of those.)
That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that at a fundamental level. These tools require humans to frame their training data by defining what the success criteria are, or else they spit out the statistically likely human-sounding response based on all of the human-generated content they've consumed.
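To make the "success criteria" point concrete, here's a toy sketch of my own (purely illustrative, not from the article): even the simplest training loop only minimizes a number a human wrote down, over data a human labeled.

```python
# Toy illustration: the machine never decides what "success" means.
# The human supplies both the labeled data and the loss function;
# the loop just mechanically minimizes the number it was handed.

# Human-framed training data: inputs paired with human-chosen targets.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Human-defined success criterion: mean squared error.
def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Gradient descent on the human-chosen loss.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * dw, b - lr * db

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss(w, b):.4f}")
# Nothing in this loop chose what counts as "success" -- that was
# baked in by whoever picked the data and the loss.
```

Scale that up as far as you like; the framing is still human-supplied.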
In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.