Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable?
An online preprint this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is at the early stages of artificial general intelligence (AGI) as it’s exhibiting “sparks of intelligence.”
OpenAI, the company behind ChatGPT, has unabashedly declared its pursuit of AGI. Meanwhile, a large number of researchers and public intellectuals have called for an immediate halt to the development of these models, citing “profound risks to society and humanity”. These calls to pause AI research are theatrical and unlikely to succeed—the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to pause.
But are the worries and hopes about AGI warranted? How close is GPT-4, and AI more broadly, to general human intelligence?
If human cognitive capacity is a landscape, AI has indeed taken over increasingly large swaths of this territory. It can now perform many separate cognitive tasks better than humans in domains such as vision, image recognition, reasoning, reading comprehension and game playing. These AI skills could drive a dramatic reordering of the global labor market in less than ten years.
But there are at least two ways of viewing the AGI issue.
The first is that, over time, AI will develop skills and learning capabilities matching those of humans, and so reach AGI level. The expectation is that the uniquely human capacity for ongoing development, learning, and transferring learning from one domain to another will eventually be duplicated by AI.