Who knows these children's games: Hot and Cold and Pin the Tail on the Donkey?
In Hot and Cold, you hide something in a room and as someone tries to find it, you say HOT if they get closer to it and COLD if they move farther away. We use variations to show how close or far they are: Icy cold, Freezing, Boiling hot, and so on.
In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick it in its rear. People do yell hot and cold, but since they are blindfolded it isn't quite as effective.
In a way, you could say they are the same game, but in one case there are external guides and internal feedback (the visual system) for orientation, while in the other there are external guides but no internal feedback.
What? No internal feedback?
How do they orient themselves without internal feedback????
Easy. THEY MAKE IT UP.
By picking up on cues in the environment, listening for where other people are, recognizing things they bump into, and so on, they build a mental representation of the room and their place within it.
Heavy stuff. Fortunately, it's a common enough occurrence that we humans have come up with a word for it: imagination.
Again, you ask, what does this have to do with AI?
Well, since this is number 3 of a three-part blog, let's recap what we've covered so far. Feel free to read the previous blogs for a deeper dive, but essentially we have established the following:
- For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
- Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic, or b) rationality is not governed by logic after all.
- We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), was enough by itself for rational decision making. For that, we need the leadership and wisdom that come from Experience (Kirk).
- A new definition of learning was offered: Learning is bringing your experience to bear on new information. We discussed the conundrum built into this definition: both experience and new information are changing all the time.
- We then did the Wason Selection Task a second time with contextual cues (beer and people rather than numbers and colours) to demonstrate the point that experience is essential to rational thought.
- We finished by asking how to solve the problem of giving computers experience.
And this is where AI comes in.
For what is the value of experience, anyway? It's that experience allows you to imagine different possibilities. Applying past experience enables you to imagine the future; without it, you draw a blank. You have no VISION. So instead of your vision being supplied by the external world (as in Hot and Cold), your brain supplies it using its internal resources, i.e., imagination (as in Pin the Tail on the Donkey).
What AI research has discovered, essentially, is that people cannot solve problems for the life of them—they basically can't learn anything—without using imagination.
The question then becomes, can a computer have an imagination?
Oddly enough, it can.
Take the matter of cats. Teaching a computer to recognize cats using the Hot and Cold model (aka Supervised Learning) would be a matter of inputting data representing photos of cats labelled "CAT" and photos without cats labelled "NO CAT." Eventually, the computer would be able to tell you whether or not there was a cat in a new photograph, or perhaps in the environment.
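If you like code, here's a rough sketch of that idea using scikit-learn, with random feature vectors standing in for the actual photos (the image-processing step is a whole other blog):

```python
# A rough sketch of the Hot and Cold (Supervised Learning) approach, assuming
# each photo has already been boiled down to a feature vector. The random
# numbers here are stand-ins for real cat / no-cat photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

cat_photos = rng.normal(loc=1.0, scale=0.5, size=(100, 8))      # labelled "CAT"
no_cat_photos = rng.normal(loc=-1.0, scale=0.5, size=(100, 8))  # labelled "NO CAT"

X = np.vstack([cat_photos, no_cat_photos])
y = np.array([1] * 100 + [0] * 100)  # 1 = CAT, 0 = NO CAT

# The "hot and cold" part: every example comes with its label, and the model
# gets corrected whenever its guess is wrong.
model = LogisticRegression().fit(X, y)

new_photo = rng.normal(loc=1.0, scale=0.5, size=(1, 8))
print("CAT" if model.predict(new_photo)[0] == 1 else "NO CAT")
```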
Using the Pin the Tail on the Donkey model (aka Unsupervised Learning), you would input data representing ONLY CATS, without labels, giving it THE EXPERIENCE OF CAT (minus the shredding). Then you would show it pictures of cats and no cats and let it figure it out for itself.
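And here's a sketch of the no-labels version. One simple way (of many) to cash out the idea is novelty detection: the model learns the shape of "cat" from cat examples alone, then flags anything that doesn't fit:

```python
# A rough sketch of the Pin the Tail on the Donkey (Unsupervised Learning)
# approach, with the same stand-in feature vectors. The model sees ONLY cats,
# with no labels, and has to build its own notion of "cat-ness".
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

only_cats = rng.normal(loc=1.0, scale=0.5, size=(200, 8))  # no labels anywhere

# Novelty detection: the model learns the shape of THE EXPERIENCE OF CAT
# and flags anything that falls outside it.
model = OneClassSVM(gamma="auto").fit(only_cats)

mystery_photos = np.vstack([
    rng.normal(loc=1.0, scale=0.5, size=(1, 8)),   # looks like a cat
    rng.normal(loc=-1.0, scale=0.5, size=(1, 8)),  # looks like something else
])
for verdict in model.predict(mystery_photos):      # +1 = fits the experience
    print("CAT" if verdict == 1 else "NO CAT")
```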
(FYI, this kind of computer program is called a "neural net" because it attempts to simulate how neurons, the building blocks of the brain, work. It learns through "backprop," short for backpropagation, or "the backwards propagation of errors.")
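To make the neuron (and backprop) business concrete, here's a toy two-layer net in plain NumPy. Don't hold me to the details; it's about the smallest thing that still counts as a neural net:

```python
# A toy "neural net" in plain NumPy, to make the neuron analogy (and backprop)
# concrete. Two small layers of artificial neurons learn the cat / no-cat
# split; the backward pass is the "backwards propagation of errors".
import numpy as np

rng = np.random.default_rng(0)

# Same stand-in setup as above: feature vectors in place of photos.
X = np.vstack([rng.normal(1.0, 0.5, (100, 8)),    # cats
               rng.normal(-1.0, 0.5, (100, 8))])  # not cats
y = np.array([1.0] * 100 + [0.0] * 100).reshape(-1, 1)

W1 = rng.normal(0, 0.1, (8, 4))  # input -> hidden "neurons"
W2 = rng.normal(0, 0.1, (4, 1))  # hidden -> output "neuron"
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(500):
    # Forward pass: signals flow forward through the neurons.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)
    # Backward pass: the output error propagates backwards, layer by layer,
    # nudging each weight in the direction that shrinks the error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.1 * (hidden.T @ d_out) / len(X)
    W1 -= 0.1 * (X.T @ d_hidden) / len(X)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```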
So not to get all philosophical on you, but as I said in the title, this essentially amounts to giving a computer a mind, doesn't it?
Or is that putting Descartes before the horse?
Ooh, I'm getting pun-chy. If you'd like to hear more, I'll be speaking about AI in Learning and Development at the Learning Guild conference, LEARN2021, November 8 in Orlando, FL.
Maybe I'll see you there, with all the other cats.
(It's possible I'll post again before that, talking about AI in L&D. Who nose?)
Mitch