
Artificial Intelligence Helps in Learning How Children Learn
Alison Gopnik, author of "Making AI Human" in Scientific American's June issue, describes the use of Bayesian statistics to outline how youngsters infer the basics of cause and effect.
By Alison Gopnik | Scientific American, June 2017 Issue

In science, we use statistics and experiments to figure out causal relationships. Researchers in artificial intelligence and machine learning have started to design software that allows computers to learn about causes the way that scientists do. Over the past 15 years, researchers in my laboratory have shown that children learn in much the same way.
In one experiment, we showed preschool children a simple machine with a switch on one side and two disks that spin on top. Then we showed them what would happen if you performed some experiments on the machine; for example, you take off one disk, flip the switch and see what happens to the other disk. They could use these data to correctly infer whether flipping the switch made the blue disk turn, which in turn made the yellow disk spin (known as a causal chain structure in statistics), or whether flipping the switch made both disks turn directly (a common cause structure).
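The inference the children make can be framed as Bayesian comparison of the two causal structures. Below is a minimal sketch, not the study's actual model: it scores one intervention (remove the blue disk, flip the switch, observe that the yellow disk still spins) against both hypotheses. The noise level and the even prior are illustrative assumptions.

```python
# Comparing two causal structures for the two-disk machine.
# Intervention: take off the blue disk, then flip the switch.
# Observation: the yellow disk spins anyway.
# Chain (switch -> blue -> yellow): with blue removed, yellow should NOT spin.
# Common cause (switch -> blue, switch -> yellow): yellow spins regardless.

EPS = 0.05  # assumed noise: causes occasionally fail, effects occasionally fire

likelihood = {
    "chain": EPS,             # P(yellow spins | blue removed, chain)
    "common_cause": 1 - EPS,  # P(yellow spins | blue removed, common cause)
}
prior = {"chain": 0.5, "common_cause": 0.5}  # no initial preference

# Bayes' rule: posterior proportional to prior times likelihood.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # the common cause structure dominates after one trial
```

A single well-chosen intervention is enough to shift the posterior sharply, which is why experiments of this kind are so informative.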
Bayesian inference considers both the strength of new evidence and the strength of your existing hypotheses. This characteristic of Bayesian statistics lends them a combination of stability and flexibility. Both toddlers and scientists hold on to well-confirmed hypotheses, but eventually enough new evidence can overturn even the most cherished idea. Several studies show that youngsters integrate existing knowledge and new evidence in this way. Elizabeth Bonawitz of Rutgers University and Laura Schulz of the Massachusetts Institute of Technology found that four-year-olds begin by thinking that a psychological state, such as feeling anxious, is unlikely to have much effect on physical well-being, such as causing a stomachache, and reject evidence to the contrary. But if you give them accumulating evidence in favor of this "psychosomatic" hypothesis, they gradually become more open to this suspect idea. A Bayesian model, moreover, can predict quite precisely just when and how a child's view will change.
In our lab we have also discovered that children can make surprisingly accurate inferences about much more abstract cause-and-effect relationships. Causes can bring about effects in different ways, which we can describe using the "Boolean logic" of computer programmers. For example, a machine might work with an "OR" operation: a block either activates the machine or not. Alternatively, it might use an "AND" operation: it takes a combination of blocks to make the machine go. Preschoolers can learn these abstract principles from statistical patterns. Sometimes they do this better than adults. Bayesian models can predict just how four-year-olds master this basic computer programming logic.
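The OR-versus-AND inference can also be cast as Bayesian hypothesis comparison. Here is a hedged sketch, again not the lab's actual model: it scores a handful of block-machine trials under each rule. The particular trials and the noise parameter are assumptions for illustration.

```python
# Inferring whether a block machine follows an "OR" rule (any block
# activates it) or an "AND" rule (it takes the combination) from trials.

EPS = 0.1  # assumed chance that a trial's outcome deviates from the rule

def predicts(rule, blocks):
    """Deterministic prediction: does the machine activate?"""
    return any(blocks) if rule == "OR" else all(blocks)

def likelihood(rule, trials):
    """P(trials | rule), with symmetric noise EPS per trial."""
    p = 1.0
    for blocks, activated in trials:
        p *= (1 - EPS) if predicts(rule, blocks) == activated else EPS
    return p

# Trials: (which blocks are placed, did the machine activate?)
# Each block alone fails; both together succeed -- the signature of "AND".
trials = [((True, False), False),
          ((False, True), False),
          ((True, True), True)]

prior = {"OR": 0.5, "AND": 0.5}
unnorm = {r: prior[r] * likelihood(r, trials) for r in prior}
z = sum(unnorm.values())
posterior = {r: unnorm[r] / z for r in unnorm}
print(posterior)  # "AND" wins decisively after just three trials
```

Note how few trials are needed: three observations already make one rule overwhelmingly more probable, matching the article's point that children draw the right conclusions from sparse data.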
Significantly, in all these cases children draw the right conclusions after only a few trials, not the thousands needed by so-called deep-learning techniques. They seem to do this by combining internalized models of the world around them with the new evidence they constantly receive from their surroundings.
This article was originally published with the title "Artificial Intelligence Helps in Learning How Children Learn"

Alison Gopnik
Alison Gopnik is a professor of psychology and an affiliate professor of philosophy at the University of California, Berkeley. Her research focuses on how young children learn about the world around them.

