Hot on the heels of GPT-4’s release, Microsoft researchers have released a hyped-up research paper claiming that the OpenAI language model shows “sparks” of human-like intelligence, or artificial general intelligence (AGI), if such a thing exists.
The OpenAI language model, which powers Microsoft’s unstable and mentally obtuse Bing AI, is “only a first step towards a series of increasingly generally intelligent systems”, the researchers said. At no point do they actually claim that the AI has fully fledged, human-level cognition. They also highlight the fact that the paper is built on an “early version” of GPT-4, which they examined while it was “still in active development by OpenAI.”
Disclaimers aside, these claims are serious and could have major implications if true. While most folks out there think of AGI as a pipe dream – an idea which, as Peter Thiel puts it, “could mean anything, and therefore nothing” – others believe that developing AGI will usher in the next era of human evolution. GPT-4 is the latest iteration of OpenAI’s Large Language Model (LLM), which theoretically places it near the top of the list of contenders to crest the vaunted general-intelligence event horizon.
In the paper, researchers wrote: “we contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM for example) that exhibit more general intelligence than previous AI models.”
They argue that GPT-4 is stronger, in a generalised sense, than the OpenAI models which have come before it. It’s one thing to build a model that can perform well on a single exam, but it’s another thing entirely to build a system that can perform many tasks without task-specific training. The latter is where GPT-4 shines, the researchers said.
“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting,” reads the paper. “Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT.”
They went on to say: “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
In this respect, the researchers may have a point. Like other LLMs, GPT-4 is certainly flawed: it produces confabulations or ‘hallucinations’, and it struggles with maths. But despite these slip-ups, the model stands out in relation to previous iterations. For example, GPT-4 passes tests such as the LSAT, and even the Certified Sommelier theory exam, with flying colours and without task-specific training.
By contrast, GPT-3.5 scored in the bottom 10% of Bar exam takers, while GPT-4 scored around the 90th percentile. That’s a major leap between two models released just a few months apart.
Elsewhere, the researchers say the bot has “overcome some fundamental obstacles such as acquiring many non-linguistic capabilities,” while also making “great progress on common-sense” – notably an area with which previous models struggled.
There are many more caveats to the AGI argument, however. The researchers admit that while the model is “at or beyond human-level for many tasks,” its overall “patterns of intelligence are decidedly not human-like.” This makes sense, given that human cognition and biology are not yet fully understood. So while GPT-4 can certainly work an Excel spreadsheet, it still does not ‘think’ or act like a human. In fact, the proper way to think about the AI is as a high-tech data compiler whose output ultimately depends on what it’s fed.
It’s also important to consider that Microsoft researchers are incentivised to exaggerate OpenAI’s achievements, whether knowingly or otherwise, given the lucrative relationship between Microsoft and OpenAI.
Scientists also need to address the reality that AGI doesn’t have an agreed-upon definition, and neither does the more general concept of “intelligence.”
The paper reads: “Our claim that GPT-4 represents progress towards AGI does not mean that it is perfect at what it does, or that it comes close to being able to do anything that a human can do (which is one of the usual definitions of AGI), or that it has inner motivation and goals (another key aspect in some definitions of AGI).”
“We believe that GPT-4’s intelligence,” the researchers said, “signals a true paradigm shift in the field of computer science and beyond.”
One small step for man, one giant leap for mankind.