Who knows these children's games? If you grew up in North America in the 20th century, you probably recognize them as Hot and Cold and Pin the Tail on the Donkey.

In Hot and Cold, you hide something in a room, and as someone tries to find it with their eyes open, you say Hot if they get closer to it and Cold if they move farther away. We use variations to show just how close or far they are: Icy cold, Freezing, Boiling hot, and so on.

In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick the tail in its rear. People still shout directions, but with no visual cues it's kind of a cacophony.

It's essentially the same game, but in one case there are EXTERNAL GUIDES, and in the other, the child has to GENERATE the hot and cold signals INTERNALLY, in their mind. Shouldn't be possible, but it is. It's literally child's play.

So how do they do it? They pick up on cues in the environment. They listen for the sounds of other people. They recognize things that they bump into and reorient themselves based on their position. THEY BUILD A REPRESENTATION OF THE ROOM AND THEIR PLACE IN IT IN THEIR MIND. Heavy stuff. Fortunately, it's common enough that we have a word for it: imagination.

Now consider the computer, and how it might learn when it has no experience. We can give it experience by exposing it to data and asking it to answer a question, e.g., "Is this a cat or not a cat?" As in Hot and Cold, we can let it know how close or far it is from the right answer, let it adjust, and try again. That's called supervised learning. But is it good enough? No--we need the computer to learn on its own, more like Pin the Tail on the Donkey. So the question becomes: can a computer have an imagination?

Oddly enough, it can. It's called backprop, short for backpropagation, or "the backwards propagation of errors." Instead of getting feedback from an outside observer, we give the computer an internal loop in which it takes what is essentially a guess, sees the result, compares that to the previous guess, adjusts, and takes another guess--just like we do. (There's a tiny code sketch of this loop below, after the sign-off.)

So in the SUPERVISED LEARNING model of AI (Hot and Cold), you might expose a computer to photos of cats and photos with no cats, and label them. The program eventually learns to distinguish photos with cats from those without. Amazing! But with the UNSUPERVISED LEARNING model (Pin the Tail on the Donkey), you would train the computer using ONLY photos with cats, and no labels, then ask it to find cat photos among a bunch of mixed ones, letting it apply its EXPERIENCE to sort them out.

As you can imagine, it takes a long time to get this right, and the results are very specific. Even so, it is quite amazing what has been accomplished so far. I hope that by adding this background to your experience, you can bring it to bear upon any new information you come across about AI.

Good luck!

Mitch
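P.S. For the code-curious, here is a minimal, hypothetical sketch of that guess-check-adjust loop in plain Python (no AI library involved, and the numbers are made up). It fits a single weight by guessing, measuring the error, and nudging the guess in the direction that shrinks the error--a toy cousin of what backpropagation does across millions of parameters, not the real thing.

```python
# Toy "guess, check, adjust" loop: learn a single weight w so that w * x ≈ y.
# This is one-parameter gradient descent -- a cartoon of the internal feedback
# loop described above, not an actual neural network.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # secretly, y = 3 * x

w = 0.0              # the initial guess
learning_rate = 0.05

for step in range(100):
    # How wrong is the current guess? (mean squared error)
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Which direction makes the error smaller? (the gradient)
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient  # adjust the guess and try again
    if step % 20 == 0:
        print(f"step {step:3d}  guess w = {w:.3f}  error = {error:.3f}")

print(f"final guess: w = {w:.3f}  (the hidden rule was w = 3)")
```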
This is number 2 of a 3-part blog. Let's recap:
So, if the acquisition of logical rules isn’t learning, what is? And the answer is... "Learning is bringing your experience to bear upon new information."
Say what?
So consider the big box to be your experience, i.e., all you have learned from interacting with the universe so far. And consider the circle to be any new information you receive through any of your five senses (or senses as yet unknown). Your experience is how you interpret that new information: relevant or irrelevant, actionable or ignorable, worth remembering or expendable, etc. Since everyone's experience is different, everyone's interpretation of new information is different as well, which is why it's so hard for us to agree on anything--this theory included.
That would be the end of the story, except for one thing: the big box of experience KEEPS CHANGING all the time, because the little circle of new information KEEPS CHANGING all the time.
So what we really have is a situation like this--a dynamical system where both our internal, remembered and embodied experience and our external, sense-driven experiences—aka, “new information”—keep changing and influencing one another on an ongoing basis.
This is how we learn, for instance, that Pluto is no longer a planet. Fortunately, he is still a Disney character, so that's a relief.
How does this apply to AI?
Let’s try the Wason Selection Task again, this time with a slight twist. Your job is to test this rule: “If someone is drinking alcohol, then that person must be age 18 or older.” From where you are standing, you can observe four people: a person drinking soda (you can’t see how old they are); a person drinking beer (you can’t see how old they are); a 30-year-old person (you can’t see what they’re drinking); and a 16-year-old person (you can’t see what they’re drinking). Which of these four people must be checked in order to make sure the rule is being followed?
Answer below.
You checked the 16-year-old and the beer drinker, right? To see what the 16-year-old is drinking, and to see how old the beer drinker is. News flash: this is EXACTLY THE SAME PROBLEM as the original.
People are better at the drinking variation because it gives CONTEXT, which allows you to bring your experience to bear upon it. Neat, huh?
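If you like seeing the logic spelled out, here is a minimal sketch in Python (the names and data are invented for illustration) of the check you just did in your head: a person only needs checking if the part you can't see could let them break the rule "if drinking alcohol, then 18 or older."

```python
# Rule: "If someone is drinking alcohol, then that person must be 18 or older."
# We check someone only if the information we CAN'T see could break the rule.

people = [
    {"who": "soda drinker", "drink": "soda", "age": None},
    {"who": "beer drinker", "drink": "beer", "age": None},
    {"who": "30-year-old",  "drink": None,   "age": 30},
    {"who": "16-year-old",  "drink": None,   "age": 16},
]

ALCOHOL = {"beer", "wine", "whiskey"}  # assumed list, for illustration

def must_check(person):
    """True if the hidden information could reveal a rule violation."""
    drink, age = person["drink"], person["age"]
    if drink in ALCOHOL and age is None:
        return True   # drinking alcohol, age unknown -> might be under 18
    if age is not None and age < 18 and drink is None:
        return True   # under 18, drink unknown -> might be drinking alcohol
    return False      # soda drinkers and 30-year-olds can't break the rule

for p in people:
    print(p["who"], "->", "check" if must_check(p) else "ignore")
```

Swap "drinking alcohol" for "even number" and "18 or older" for "green," and you have the card problem--the check itself doesn't change.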
You see the problem? If we really want computers to think like we do, we can’t just tell them what to do, we have to make it so they can figure it out for themselves.
How do we do that? Read the next blog and find out! (This is so exciting!)

Mitch

FYI, I'm speaking at the now-online LEARN2021 conference, albeit by video. Check it out. (NOTE: The dates have changed--they haven't sent an updated promo card.)
What is learning? Many people would agree that learning is about the ability to think, by which they mean to be rational. Scientists have developed many studies to test human rationality--there are a bunch of them, and in every one, we fail. One is called the Wason Selection Task, and we're going to do it right now.
The Wason Selection Task
![Four cards on a table: a 3, an 8, a green back, and a blue back]() (Source: Puzzlewocky)
Which cards need to be turned over in order to test the truth of the following proposition:
“If one of these cards has an even number on one side then its other side is green.”
Which cards would you turn over, without turning over any cards unnecessarily?
For example, 3 card only; 8 card only; 3 and blue; 8 and green; all the cards
etc.
Come up with your answer, then scroll down.
Most common answers: the 8 card alone, or the 8 card together with the green card.
The correct answer is that you must turn over only the 8 card and the blue card. Here is an explanation for each of the cards:
- 3 card: does not need to be turned over, because it is not even, so it cannot trigger the stated proposition
- 8 card: even, so it must be turned over, because if the other side is not green, then the proposition is not true
- green card: many people choose this card, but it does not need to be turned over. If the other side is odd, the rule doesn't apply; if the other side is even, that is consistent with the rule, but it neither proves nor disproves the proposition
- blue card: does need to be turned over, because if the other side is even, then the proposition is not true
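To make that falsification logic concrete, here is a minimal sketch in Python (the representation of the cards is my own, for illustration): a card must be turned over only if its hidden side could make the rule "if even, then green" false.

```python
# Rule: "If a card has an even number on one side, then its other side is green."
# Each card shows either a number or a color; the other side is hidden.

cards = [
    ("number", 3),
    ("number", 8),
    ("color", "green"),
    ("color", "blue"),
]

def must_turn(kind, value):
    """True only if the hidden side could falsify 'if even, then green'."""
    if kind == "number":
        return value % 2 == 0   # even number with a non-green back breaks the rule
    return value != "green"     # non-green card with an even back breaks the rule

for kind, value in cards:
    print(value, "->", "turn over" if must_turn(kind, value) else "leave alone")
```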
A computer would be great at this problem: right every time. People? Only about 30%. Ergo, people are irrational.
But let’s see what happens if we turn things around a little bit. We’ll make the RADICAL assumption that people ARE rational, how about that? And if they get the problem wrong this much, there must be something amiss in the researchers’ assumptions.
So what are these assumptions? Basically, it’s this:
Rationality = Logic
This lines up well with the algorithmic definition of learning, because presumably the rules you need to learn are logical.
I happen to know a guy who epitomizes this concept.

This is Mr. Spock from the TV series, Star Trek. He’s a Vulcan—notice the ears—and on Vulcan, they believe in logic like a religion.
Now, Spock’s foil on the show is Dr. McCoy,

Dr. McCoy is a folksy country doctor stereotype—nicknamed Bones—and as such, he represents emotion.
In many episodes, Spock and McCoy, logic and emotion, end up arguing opposite sides of a problem. To resolve it, the show brings in Captain Kirk.

So let me ask you.
If Spock brings logic, and Bones brings emotion, what does Kirk bring?
Take your time.
If this were a Kahoot "word salad," I would expect to see, writ large: LEADERSHIP. WISDOM.
And WHAT is the common denominator, according to the literature, of leadership and wisdom?
Leadership
Wisdom
----------------
Experience
The COMMON DENOMINATOR of BOTH wisdom and leadership is: Experience.
When we look back at our assumptive definition, Learning = Acquiring logical rules, we see that the missing element is experience.
So, if the acquisition of logical rules isn’t learning, what is?
We'll deal with that in the next blog entry.
A la prochaine,
Mitch
P.S. FYI, I'll be speaking about AI in L&D at the Learning Guild's LEARN2021 conference, which is now online (the poster is out of date, but ain't it pretty, though?). Check their website for details.
About Mitch
I'm an eLearning designer, cartoonist, writer, editor, cogsci grad and video maker--and now podcaster!