Pioneers of AI: McClelland, Rumelhart, and Hinton Honored with 2024 Golden Goose Award
November 20, 2024

Rumelhart's earlier work emphasized the simultaneous use of all available information in language understanding, highlighting the importance of context in cognitive processing.
By the 1970s, McClelland began to diverge from mainstream symbolic processing theories, advocating for a model that incorporates context and continuous perception over discrete rules.
Despite the promise of neural networks, McClelland noted that current AI systems require significantly more data to learn compared to humans, pointing out inefficiencies in AI training methods.
He argues that the backpropagation algorithm does not accurately reflect the brain's operations, which involve bi-directional activation and integrated sensory input, prompting his investigation into alternative algorithms.
McClelland continues to explore the similarities and differences between human cognition and AI, focusing on how both systems utilize context in reasoning.
Collaborating with researchers at Google DeepMind, he investigates how both humans and AI benefit from contextual knowledge in reasoning tasks, revealing that both infuse prior knowledge into their thinking.
For their impactful research that laid the groundwork for modern artificial intelligence, McClelland and his colleagues were honored with the 2024 Golden Goose Award.
Despite the innovative advancements in neural networks, their widespread application did not occur until 2012, when improvements in computing power and datasets like ImageNet enabled the training of deep networks.
In the late 1970s and early 1980s, federal funding from agencies like the National Science Foundation and the Office of Naval Research supported groundbreaking research by James McClelland, David Rumelhart, and Geoffrey Hinton to model human cognitive abilities.
This collaboration led to the development of a computational neural network model, demonstrating that neural networks could effectively mimic human cognitive processes.
Their influential 1986 publications introduced the backpropagation algorithm, which became foundational for training neural networks and significantly advanced modern AI systems.
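To make the idea concrete, backpropagation applies the chain rule to propagate the output error backward through a network's layers, adjusting each weight in proportion to its contribution to that error. The following is a minimal sketch in plain Python, not the authors' original 1986 formulation: a tiny two-input network with one hidden layer learning XOR, with illustrative choices of hidden size, learning rate, and epoch count.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: XOR, a classic task that a single layer cannot solve.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 4  # hidden units (illustrative choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def mean_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

def train(epochs=3000, lr=0.5):
    global b2
    for _ in range(epochs):
        for x, y in zip(X, Y):
            h, o = forward(x)
            # Backward pass: chain rule from the output error to each weight.
            d_o = 2 * (o - y) * o * (1 - o)            # gradient at output unit
            for j in range(H):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])  # error "sent backward"
                w2[j] -= lr * d_o * h[j]
                for i in range(2):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_o

initial = mean_loss()
train()
final = mean_loss()
```

After training, the mean squared error drops well below its initial value, showing the network has shaped its hidden-unit weights to capture the nonlinear XOR structure — the kind of learned internal representation the 1986 work demonstrated.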
Hinton's contributions during this period included the idea that symbolic representations emerge from the collective activity of many neurons, which was pivotal in their two-volume work.
Summary based on 2 sources
Sources
Stanford University
The cognitive research behind AI's rise — Tech Xplore, Nov 20, 2024