Who knows these children's games?
If you grew up in North America in the 20th Century, you probably recognize them as Hot and Cold and Pin the Tail on the Donkey.
In Hot and Cold, you hide something in a room and, as someone searches for it with their eyes open, you say Hot when they get closer and Cold when they move farther away. We use variations to show just how close or far they are: Icy cold, Freezing, Boiling hot, and so on.
In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick it in its rear. People still shout directions, but with no visual cues it's kind of a cacophony.
It’s essentially the same game, but in one case there are EXTERNAL GUIDES, and in the other, the child has to GENERATE the hot and cold signals INTERNALLY, in their mind.
Shouldn’t be possible, but it is. It’s literally child’s play.
So how do they do it?
They pick up on cues in the environment. They listen for the sounds of other people. They recognize things that they bump into and reorient themselves based on their position.
THEY BUILD A REPRESENTATION OF THE ROOM AND THEIR PLACE IN IT IN THEIR MIND.
Heavy stuff. Fortunately, it's common enough that we have a word for it.
Now consider the computer and how it might learn when it has no experience.
We can give it experience by exposing it to data and asking it to answer a question, e.g., "Is the data a cat or not a cat?" As in Hot and Cold, we can let it know how close or far it is from the right answer, let it adjust and try again. That's called supervised learning.
But is it good enough? No--we need the computer to learn on its own, more like Pin the Tail on the Donkey. So the question becomes, can a computer have an imagination?

Oddly enough, it can. It’s called backprop, short for backpropagation, or “the backwards propagation of errors.”
Instead of getting feedback from an outside observer, we give the computer an internal loop: it takes what is essentially a guess, sees the result, compares that result to what it expected, adjusts, and takes another guess--just like we do.
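To make that loop concrete, here is a minimal sketch in Python of the guess-check-adjust cycle that backprop automates. A single weight learns a made-up rule (multiply by 3) purely from its own error signal; the data, learning rate, and step count are all invented for illustration.

```python
# A minimal sketch of the "guess, check, adjust" loop that backprop automates.
# One weight w learns the hidden rule y = 3 * x from its own error signal --
# no outside observer shouting "hot" or "cold".

inputs = [1.0, 2.0, 3.0, 4.0]
targets = [3.0, 6.0, 9.0, 12.0]   # the "hidden" rule the weight must discover

w = 0.0              # the initial guess
learning_rate = 0.01

for step in range(200):
    for x, y in zip(inputs, targets):
        guess = w * x                    # take a guess
        error = guess - y                # see how far off it was
        gradient = error * x             # which way makes the error smaller?
        w -= learning_rate * gradient    # adjust, then guess again

print(f"learned weight: {w:.3f}")        # converges toward 3.0
```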
So in the SUPERVISED LEARNING model of AI (Hot and Cold), you might expose a computer to photos of cats and photos with no cats, and label them.
The program eventually learns to distinguish photos with cats from those without. Amazing!
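If you want to see the Hot and Cold version in miniature, here is a toy sketch of supervised learning using NumPy. The "photos" are just made-up two-number feature vectors with labels attached (1 for cat, 0 for not a cat); real systems work on pixels and far bigger models, but the feedback loop has the same shape.

```python
# A toy sketch of supervised learning (the Hot and Cold setup): every example
# comes with a label, and the model is told how wrong each guess is.
# The data here is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: 1 = cat, 0 = not a cat
cats     = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
not_cats = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 50 + [0] * 50)

# A single logistic "neuron": two weights plus a bias
w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    preds = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # guess: probability of "cat"
    error = preds - y                            # the Hot/Cold signal from the labels
    w -= lr * (X.T @ error) / len(y)             # adjust toward less error
    b -= lr * error.mean()

preds = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((preds > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```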
But with the UNSUPERVISED LEARNING model (Pin the Tail on the Donkey), you would train the computer using ONLY photos with cats, and no labels at all, then ask it to find the cat photos among a bunch of mixed ones, letting it apply its EXPERIENCE to sort them out.
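Here is a rough sketch of that Pin the Tail on the Donkey flavor, again on made-up feature vectors. The model sees only cat examples and no labels, summarizes what it has seen, and then ranks a mixed pile of new photos by how well each one fits that internal picture. A simple distance score stands in here for the much richer representations real unsupervised models learn.

```python
# A rough sketch of learning from cat photos ONLY, with no labels: build an
# internal picture of "cat", then score unseen photos by how well they fit it.
# Synthetic feature vectors, as in the supervised sketch above.
import numpy as np

rng = np.random.default_rng(1)

# Training data: cat photos only, no labels at all
cat_train = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))

# The model's "experience": the typical cat and how much cats vary
cat_mean = cat_train.mean(axis=0)
cat_std = cat_train.std(axis=0)

def cat_score(photo):
    """How closely does this photo resemble the cats seen during training?"""
    z = (photo - cat_mean) / cat_std      # distance in units of typical variation
    return -np.sum(z ** 2)                # higher score = more cat-like

# A mixed pile of new photos: some cats (first five), some not
new_cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(5, 2))
new_others = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(5, 2))
mixed = np.vstack([new_cats, new_others])

# Sort the pile by cat-likeness; the cats float to the top
ranked = sorted(range(len(mixed)), key=lambda i: cat_score(mixed[i]), reverse=True)
print("most cat-like first:", ranked)    # indices 0-4 (the cats) come first
```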
As you can imagine, it takes a long time to get this right, and the results tend to be very specific to the task at hand. Even so, what has been accomplished so far is quite amazing.
In any case, I hope this background has added to your experience, so that you can bring it to bear upon any new information you come across about AI.
Good luck!
Mitch
RECAP OF THE 3-PART BLOG
- For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
- Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic, or b) rationality is not governed by logic after all.
- We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), is enough on its own for rational decision making. For that, we need the leadership and wisdom that come from Experience (Kirk).
- So, if the acquisition of logical rules isn’t learning, what is? Answer: Learning is bringing your experience to bear upon new information. Since experience and new information are both changing all the time, this forms a dynamical system.
- Trying the Wason Selection Task with story-based instead of abstract elements, we found that humans are rational after all when they are given some context to which they can apply their experience.
- Comparing the supervised and unsupervised learning models of AI to children's games, we found that the key to giving experience to computers is supplying them with an artificial imagination, if you will, via backprop.
- When you think about AI with these constraints in mind, you will hopefully be able to meet new AI tools and claims with more informed and realistic expectations.