
Computer scientists are questioning whether Alphabet’s DeepMind will ever make A.I. more human-like


Computer scientists are questioning whether DeepMind, the Alphabet-owned U.K. firm that's widely regarded as one of the world's premier AI labs, will ever be able to make machines with the kind of "general" intelligence seen in humans and animals.

In its quest for artificial general intelligence, which is sometimes called human-level AI, DeepMind is focusing a chunk of its efforts on an approach called "reinforcement learning." This involves programming an AI to take actions that maximize its chance of earning a reward in a given situation. In other words, the algorithm "learns" to complete a task by seeking out these preprogrammed rewards.

The technique has been used successfully to train AI models to play (and excel at) games like Go and chess. But these systems remain relatively narrow. DeepMind's famous AlphaGo AI can't draw a stickman or tell the difference between a cat and a rabbit, for example, while a seven-year-old can.

Despite this, DeepMind, which was acquired by Google in 2014 for around $600 million, believes that AI systems underpinned by reinforcement learning could theoretically grow and learn so much that they break the theoretical barrier to AGI without any new technological developments.

Researchers at the company, which has grown to around 1,000 people under Alphabet's ownership, argued in a paper submitted to the peer-reviewed Artificial Intelligence journal last month that "Reward is enough" to reach general AI. The paper was first reported by VentureBeat last week.

In the paper, the researchers claim that if you keep "rewarding" an algorithm each time it does something you want it to, which is the essence of reinforcement learning, then it will eventually start to show signs of general intelligence.
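To make the idea concrete, here is a minimal sketch of the reinforcement-learning loop the article describes: an agent takes actions, receives a reward signal, and updates its estimates so that reward-earning behavior becomes more likely. This is an illustrative toy (tabular Q-learning on a five-cell track with an assumed reward at the rightmost cell), not DeepMind's actual systems, which operate at vastly larger scale.

```python
import random

# Toy environment: cells 0..4 on a line; the only reward sits at cell 4.
N_STATES = 5
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the track; reward 1.0 on reaching cell 4."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward plus
        # discounted value of the best follow-up action.
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy steps right from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told "go right"; the behavior emerges purely from chasing the reward signal. The paper's thesis is that, with a sufficiently rich environment, this same mechanism could in principle drive far more sophisticated abilities.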
"Reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation," the authors write. "We suggest that agents that learn through trial and error experience to maximize reward could learn behavior that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence."
