
In our last blog, we talked about the 20th-century conceptualization of learning, which was based on algorithms:
Learning = Acquiring rules
And of course these rules had to be logical, because:
Rationality = Logic
We then did the Wason Selection Task. In case you didn't read the earlier blog, here it is again.
Solve the following
To test the proposition, "If one of these cards has an even number on one side, then its other side is green," which cards would you turn over without turning over any cards unnecessarily?
[Image: four cards showing 3, 8, a green face, and a blue face.]
I'll give you a second to think about it. Answer below.
Answer: 8 and Blue, because nothing on the other side of 3 or Green could disprove the statement.
Only 30% of people get this right on the first try, which goes to prove that either
- A) people aren't rational or
- B) rationality isn't logic after all.
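To see why, it helps to spell out the falsification logic. Here's a minimal sketch in Python (my own illustration, not part of the original puzzle): the rule "if even, then green" can only be broken by a card that is even on one side and not green on the other, so those are the only cards worth flipping.

```python
def must_flip(visible):
    """A card needs flipping only if its hidden side could falsify
    the rule "if one side is even, then the other side is green"."""
    if isinstance(visible, int):
        # An even number could be hiding a non-green back: check it.
        return visible % 2 == 0
    # A non-green color could be hiding an even number: check it.
    return visible != "green"

for card in [3, 8, "green", "blue"]:
    print(card, "-> flip" if must_flip(card) else "-> leave alone")
# Prints: 3 -> leave alone, 8 -> flip, green -> leave alone, blue -> flip
```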
Finally, we asked the question that was begging to be asked:
If the acquisition of logical rules isn’t learning, what is?
And the answer, according to cognitive science, is....
Learning is bringing your experience to bear on new information.
We can represent it like this:
[Diagram: a box and a circle feeding into each other.]
Simple, right? Only the contents of THE BOX keep CHANGING. Why?
Because the contents of THE CIRCLE keep CHANGING!
So really, it's a two-way street.
This is what we call a Dynamical System, where both our internal, embodied experience and our external, sense-driven experiences keep changing and influencing one another on an ongoing basis.
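If a toy model helps make that concrete, here's a minimal sketch in Python (entirely my own illustration; the update rule and numbers are arbitrary) of two coupled quantities, each reshaping the other on every step:

```python
def step(experience, new_info):
    """One tick of a toy dynamical system: each state updates the other."""
    # Experience colors how the new information is interpreted...
    interpretation = new_info * (1.0 + experience)
    # ...and the interpretation, in turn, nudges the experience.
    experience = 0.9 * experience + 0.1 * interpretation
    return experience, interpretation

experience = 0.5                    # some prior, embodied experience
for new_info in [1.0, 0.2, 0.8]:    # a stream of sense-driven input
    experience, seen = step(experience, new_info)
    print(f"experience={experience:.2f}  interpretation={seen:.2f}")
```

Notice that what gets "seen" depends on the whole history of the interaction, not on the input alone. That's the two-way street.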
I suppose you're wondering, "How does this apply to AI?"
To get there, let’s try the Wason Selection Task one more time, this time with a twist.
Again, we borrow from Puzzlewocky.
Solve the following
You work in a bar. Your job is to enforce this rule:
“If someone is drinking alcohol, then that person must be age 18 or older.”
From where you are standing, you can observe four people:
- a person drinking soda (you can’t see how old they are);
- a person drinking beer (you can’t see how old they are);
- a 30-year-old person (you can’t see what they’re drinking) and
- a 16-year-old person (you can’t see what they’re drinking).
Which of these four items must be checked, at a minimum, in order to make sure the rule is being followed?
Answer below.
The beer and the 16-year-old, right? You need to check that the person drinking the beer is 18 or older, and that the 16-year-old isn't drinking alcohol.
Surprise! This is exactly the same question as before, only now close to 100% of people answer correctly, rather than 30%.
Why?
Because it gives us context, something to which we can bring (that's right) our experience.
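Strip the context away, though, and the two puzzles really do reduce to one check. Here's a quick sketch (again my own illustration, not from Puzzlewocky): any rule of the form "if P then Q" is falsified only by an item where P holds and Q doesn't, so those are the only items worth checking.

```python
def needs_check(visible_side, holds):
    """You can see one half of an "if P then Q" rule; check the item
    only if the hidden half could complete a violation (P and not-Q)."""
    if visible_side == "P":
        return holds        # P is true, so the hidden Q must be confirmed
    return not holds        # Q is false, so the hidden not-P must be confirmed

# Cards: P = "even number", Q = "green back"
cards = [("3", "P", False), ("8", "P", True),
         ("Green", "Q", True), ("Blue", "Q", False)]
# Bar:   P = "drinking alcohol", Q = "18 or older"
bar = [("soda drinker", "P", False), ("beer drinker", "P", True),
       ("30-year-old", "Q", True), ("16-year-old", "Q", False)]

for label, side, holds in cards + bar:
    print(label, "-> check" if needs_check(side, holds) else "-> ignore")
# Only 8, Blue, the beer drinker, and the 16-year-old need checking.
```

Same function, same answers. The only thing separating 30% from nearly 100% is the story wrapped around the rule.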
You see the problem? If we really want computers to think like we do, we can’t just tell them what to do. We have to make it so they can figure it out themselves.
How the heck can we do that? We'll cover that in the next blog.
Life is not a box of chocolates.
Mitch
P.S. As mentioned in the first blog in this series, I'll be speaking about AI in L&D at the Learning Guild's LEARN2021 conference in Orlando FL, November 8. If you're there, say hi and we can talk further.