Learning from Babies to Improve AI

What if AI could learn the way an infant does? AI models are typically trained on enormous data sets containing billions of examples. But scientists at New York University decided to experiment with a far smaller data set: the visual and auditory experiences of a single toddler as he learned to talk. Their model turned out to learn an astonishing amount, thanks to a child named Sam.

The team attached a camera to Sam’s head, which he wore on and off for around 18 months, starting when he was six months old and continuing until shortly after his second birthday. The footage gave the scientists a unique window into a child’s world, and they used it to train a neural network to associate words with the objects they refer to. Cassandra Willyard’s report on the study is worth a look, if only for the overwhelmingly adorable photos!
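The article doesn’t spell out the training details, but a common way to learn word-object associations from paired frames and transcribed speech is a contrastive objective: embeddings of a video frame and the utterance heard at the same moment are pulled together, while mismatched pairs are pushed apart. Here is a minimal sketch of that idea; the encoders, tensor shapes, and temperature value are all illustrative assumptions, not the researchers’ actual setup.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, word_emb, temperature=0.07):
    """Contrastive loss over a batch of (frame, utterance) pairs.

    frame_emb, word_emb: (batch, dim) tensors produced by two separate
    encoders (e.g., a vision backbone and a text encoder -- hypothetical here).
    The matching pair for frame i is utterance i; all others are negatives.
    """
    # Normalize so the dot product becomes cosine similarity.
    frame_emb = F.normalize(frame_emb, dim=-1)
    word_emb = F.normalize(word_emb, dim=-1)

    # Similarity of every frame to every utterance in the batch.
    logits = frame_emb @ word_emb.t() / temperature

    # Correct pairings sit on the diagonal.
    targets = torch.arange(frame_emb.size(0))
    loss_frames = F.cross_entropy(logits, targets)      # frames -> utterances
    loss_words = F.cross_entropy(logits.t(), targets)   # utterances -> frames
    return (loss_frames + loss_words) / 2
```

Trained this way, the model gradually aligns the sound of a word like “ball” with the visual appearance of the thing Sam was looking at when he heard it.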

The study hints at the role infants could play in pushing machine learning closer to human learning, and perhaps, eventually, toward AI systems that rival human intellect. Infants, with their innate curiosity and aptitude for learning, have fascinated researchers for decades. The way children learn, through a great deal of trial and error, suggests how human intelligence matures through continuous acquisition of knowledge about the world. Scientists have long explored babies’ innate physical expectations: that a ball hidden from sight continues to exist, stays solid and unchanged, and moves along a continuous path rather than teleporting randomly.

A team at Google DeepMind set out to teach an AI system this kind of “intuitive physics.” They trained a model on videos of objects in motion so that it could learn how objects typically behave. The thinking goes that an infant’s surprise at an unexpected event, like a ball suddenly flying out a window, arises because the event contradicts the child’s understanding of physics. The DeepMind team successfully trained their AI to register “surprise” when objects behaved in ways that conflicted with what the model had learned.
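The article doesn’t say how “surprise” is quantified, but a standard approach in this line of work is prediction error: the model predicts the next state of each object, and surprise is scored as how far reality deviates from the prediction. A minimal sketch, with the shapes, numbers, and threshold all assumed purely for illustration:

```python
import numpy as np

def surprise_score(predicted_next, observed_next):
    """Surprise as prediction error: the mean squared distance between
    the model's predicted object state (e.g., 2D position) and the state
    actually observed in the next frame."""
    return float(np.mean((predicted_next - observed_next) ** 2))

# Hypothetical example: the model expects a rolling ball to continue
# along a smooth, continuous path...
predicted = np.array([1.0, 0.0])          # expected position after one step
observed = np.array([1.05, 0.0])          # ball roughly where physics predicts
print(surprise_score(predicted, observed))            # low -> unsurprising

# ...but instead the ball "teleports" across the scene.
observed_teleport = np.array([5.0, 3.0])
print(surprise_score(predicted, observed_teleport))   # high -> "surprise"
```

A high score plays the role of the infant’s widened eyes: the world did something the learned model of object behavior said it shouldn’t.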

Yann LeCun, a Turing Award laureate and chief AI scientist at Meta, has argued that AI systems should observe the world the way human children do, which he believes could lead to more intelligent AI. LeCun suggests that humans carry a mental blueprint of the world that informs our understanding of it, and he is working on entirely new AI architectures that mirror human learning processes.
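As a loose illustration of the “mental blueprint” idea, not LeCun’s actual architecture: a learned world model encodes raw observations into an abstract internal state and then predicts how that state will change, letting the system anticipate the world rather than merely react to it. Every layer size and name below is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Illustrative world model: encode an observation into an abstract
    state, then predict the next state from the current state and an
    action. Dimensions are placeholders, not a real design."""

    def __init__(self, obs_dim=64, state_dim=16, action_dim=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, state_dim)                    # observation -> internal state
        self.predictor = nn.Linear(state_dim + action_dim, state_dim)   # (state, action) -> next state

    def forward(self, obs, action):
        state = torch.tanh(self.encoder(obs))
        next_state = self.predictor(torch.cat([state, action], dim=-1))
        return state, next_state
```

Training such a model to make good predictions is, in this framing, analogous to a child building up expectations about how the world works through observation.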

While current AI systems are adept at specific tasks like playing chess or generating realistic, human-like text, they pale in comparison to the complexity of the human brain. These systems can be brittle and lack common sense, making it hard for them to navigate the complexities of the real world. Studies of infant learning could provide vital insights into closing this gap.

Thanks to deep learning, robots excel at particular tasks such as picking up and moving objects, and they are even improving in areas such as cooking. However, getting robots to function efficiently in unfamiliar environments with scant data remains a significant challenge.