The ID Fanatic

AI3: Giving the computer a mind of its own

9/29/2021


 
Who knows these children’s games? ​
[Images: children playing Hot and Cold (left) and Pin the Tail on the Donkey (right)]
The one on the left is Hot and Cold, the one on the right, Pin the Tail on the Donkey. 

In Hot and Cold, you hide something in a room and as someone tries to find it, you say HOT if they get closer to it and COLD if they move farther away. We use variations to show how close or far they are: Icy cold, Freezing, Boiling hot, and so on.  

In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick it in its rear. People do yell hot and cold, but since they are blindfolded it isn't quite as effective.

In a way, you could say it is the same game, but in one case there are external guides and internal feedback (the visual system) for orientation, while in the other there are external guides but no internal feedback.

What? No internal feedback?
How do they orient themselves without internal feedback????


Easy. THEY MAKE IT UP.


Through picking up on cues in the environment, listening for where other people are, recognizing things that they bump into, and so on, they build a representation of the room and their place within it in their mind. 

Heavy stuff. Fortunately, it's a common enough occurrence that we humans have come up with a word for it:
[Image: the word "imagination"]
Again, you ask, what does this have to do with AI? 

Well, since this is number 3 of a three-part blog, let's recap what we've covered so far. Feel free to read the previous blogs for a deeper dive, but essentially we have established the following: 
  1. For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
  2. Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic or b) rationality is not governed by logic after all.
  3. We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), was enough on its own for rational decision making. For that, we need the leadership and wisdom that come from Experience (Kirk).
  4. A new definition of learning was offered: Learning is bringing your experience to bear on new information. We discussed the conundrum built into this definition: both experience and new information are changing all the time.
  5. We then did the Wason Selection Task a second time with contextual cues (beer and people rather than numbers and colours) to demonstrate the point that experience is essential to rational thought.
  6. We finished by asking how to solve the problem of giving computers experience.


And this is where AI comes in.

For what is the value of experience, anyway? Its value is that it allows you to imagine different possibilities. Applying past experience enables you to imagine the future, whereas without it you draw a blank. You have no VISION. So instead of your vision being supplied by the external world (as in Hot and Cold), your brain supplies it using its internal resources, i.e., imagination (as in Pin the Tail on the Donkey). 
​

What AI research has discovered, essentially, is that people cannot solve problems for the life of them—they basically can't learn anything—without using imagination. 

​The question then becomes, can a computer have an imagination?

​Oddly enough, it can.
[Image. Caption: Short for backpropagation, or "the backwards propagation of errors."]
[Image: backpropagation diagram, via Slideteam.net]
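(Not from the original post, but for the curious: here is roughly what "the backwards propagation of errors" amounts to for a single artificial neuron, sketched in Python. All the numbers are made up; a real network just does this over millions of weights.)

import numpy as np

# A toy illustration of "the backwards propagation of errors":
# compare the neuron's output to the target, send the error back,
# and nudge the weights so the next guess is a little less wrong.
x = np.array([1.0, 0.5])      # input (invented)
w = np.array([0.2, -0.4])     # weights: the part that gets "learned"
target = 1.0                  # what the output should have been

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5):
    out = sigmoid(w @ x)                  # forward pass: make a guess
    error = out - target                  # how wrong was it?
    grad = error * out * (1 - out) * x    # propagate that error back to the weights
    w = w - 0.5 * grad                    # adjust the weights to shrink the error
    print(step, round(float(out), 3))     # the guess creeps toward the target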
Take the matter of cats. Teaching a computer to recognize cats using the Hot and Cold model (aka Supervised Learning) would be a matter of inputting data representing photos of cats labelled "CAT" and photos without cats labelled "NO CAT." Eventually, the computer would be able to tell you whether or not there was a cat in a new photograph, or perhaps in the environment.
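To make that concrete, here is a toy sketch of my own (not from the post, and nothing like a real vision system): each "photo" is a few invented numbers, every training example carries a CAT or NO CAT label, and the computer simply learns what the average CAT and NO CAT look like.

import numpy as np

# Supervised learning, Hot-and-Cold style: every example comes with a label.
# The four numbers per "photo" are invented stand-ins for real image features.
labeled_photos = [
    (np.array([0.9, 0.8, 0.7, 0.9]), "CAT"),
    (np.array([0.8, 0.9, 0.6, 0.8]), "CAT"),
    (np.array([0.1, 0.2, 0.1, 0.3]), "NO CAT"),
    (np.array([0.2, 0.1, 0.3, 0.2]), "NO CAT"),
]

# "Training": average what CAT looks like and what NO CAT looks like.
centroids = {
    label: np.mean([f for f, lbl in labeled_photos if lbl == label], axis=0)
    for label in ("CAT", "NO CAT")
}

def classify(photo):
    # A new photo gets whichever label's average it sits closest to.
    return min(centroids, key=lambda label: np.linalg.norm(photo - centroids[label]))

print(classify(np.array([0.85, 0.75, 0.8, 0.9])))   # -> CAT
print(classify(np.array([0.15, 0.2, 0.2, 0.1])))    # -> NO CAT

A real system would use a deep network rather than averages, but the labelled hot/cold feedback on every example is the defining feature.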
Using the Pin the Tail on the Donkey model (aka Unsupervised Learning), you would input data representing ONLY CATS, without labels, giving it THE EXPERIENCE OF CAT (minus the shredding). Then you would show it pictures of cats and no cats and let it figure it out for itself.

(FYI, this kind of computer program is called a "neural net" because it attempts to simulate how neurons, the building blocks of the brain, work.)
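Here is the same toy setup, again my sketch rather than anything from the post, reworked the Pin-the-Tail way: the model only ever sees cats, builds its own internal picture of "cat" (just an average and a typical spread here), and then judges new photos by how far they fall from that experience.

import numpy as np

# Unsupervised, Pin-the-Tail style: no labels, only THE EXPERIENCE OF CAT.
cat_only_photos = np.array([
    [0.90, 0.80, 0.70, 0.90],
    [0.80, 0.90, 0.60, 0.80],
    [0.85, 0.70, 0.75, 0.90],
])

experience_of_cat = cat_only_photos.mean(axis=0)                       # its picture of "cat"
typical_spread = np.linalg.norm(cat_only_photos - experience_of_cat,   # how much cats vary
                                axis=1).max()

def looks_like_a_cat(photo):
    # Close enough to everything it has experienced? Call it a cat.
    return np.linalg.norm(photo - experience_of_cat) <= 2 * typical_spread

print(looks_like_a_cat(np.array([0.9, 0.85, 0.7, 0.85])))   # True  -> CAT
print(looks_like_a_cat(np.array([0.1, 0.20, 0.1, 0.20])))   # False -> NO CAT

A real neural net would learn a much richer internal representation than an average, but the principle is the same: no labels, just experience.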

So not to get all philosophical on you, but as I said in the title, this essentially amounts to giving a computer a mind, doesn't it?

Or is that putting Descartes before the horse?

Ooh, I'm getting pun-chy. If you'd like to hear more, I'll be speaking about AI in Learning and Development at the Learning Guild conference, LEARN2021, November 8 in Orlando, FL.

Maybe I'll see you there, with all the other cats.
(It's possible I'll post again before that, talking about AI in L&D. Who nose?)
Mitch

AI1: Teaching Machines to Learn Like Us

9/24/2021


 
AI, or machine learning, is based on human learning.

You won’t believe it, but this is a radical concept.

Radical because there has been a lot of confusion over just how humans learn.

For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do.
 
Ergo, by algorithms               

IF this is so, THEN do this.​

​To wit: 
  • IF a ball is thrown at your head, THEN duck.
  • IF there’s lightning, THEN don’t stand under a tree.
  • IF a nuclear bomb drops, THEN hide under your desk.
 
We thought that thinking was a process of having a bunch of these rules in your head and applying them.

This meant that learning was the process of acquiring more and more rules.
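(In code, that picture of learning might look something like this toy sketch of mine: thinking is matching IF-conditions against the situation, and "learning" is just piling on more rules.)

# A toy rule-based "thinker": each rule is an IF-condition plus a THEN-action.
rules = [
    (lambda s: s.get("ball_at_head"), "duck"),
    (lambda s: s.get("lightning"),    "don't stand under a tree"),
]

def think(situation):
    # Apply every rule whose IF-condition matches the current situation.
    return [action for condition, action in rules if condition(situation)]

def learn(condition, action):
    # On this view, learning is nothing more than adding another rule to the pile.
    rules.append((condition, action))

print(think({"lightning": True}))                      # -> ["don't stand under a tree"]
learn(lambda s: s.get("nuclear_bomb"), "hide under your desk")
print(think({"nuclear_bomb": True}))                   # -> ["hide under your desk"]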
 
​

Learning = Acquiring rules ​

​But it’s not. 

​What is learning?

Many people would agree that learning is about the ability to think, by which they mean to be rational. Scientists have developed many studies to test human rationality. There’s a bunch of them. And in every one, we fail. 
​
One is called the Wason Selection Task, and we’re going to do it right now. 
[Image: The Wason Selection Task, four cards showing a 3, an 8, a green card, and a blue card (Source: Puzzlewocky)]
Which cards need to be turned over in order to test the truth of the following proposition:
“If one of these cards has an even number on one side then its other side is green.”


​Which cards would you turn over, without turning over any cards unnecessarily?

For example, 3 card only; 8 card only; 3 and blue; 8 and green; all the cards; etc.

​
Come up with your answer, then scroll down.







​
Most common answers:
[Image: the most common answers]
The correct answer is that you must turn over only the 8 card and the blue card. Here is an explanation for each of the cards (with a quick brute-force check after the list):
  • 3 card: does not need to be turned over, because it is not even, so it cannot trigger the stated proposition
  • 8 card: even, so it must be turned over, because if the other side is not green, then the proposition is not true
  • green card: many people choose this card, but it does not need to be turned over. If the other side is odd, the proposition is not tested; if the other side is even, that is consistent with the proposition but neither proves nor disproves it
  • blue card: does need to be turned over, because if the other side is even, then the proposition is not true
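Not part of the original puzzle, but here is a small brute-force sketch in Python of how a computer could reach the same answer: for each card, check whether any possible hidden side could break the rule; only those cards are worth turning over.

# The rule: "if a card has an even number on one side, its other side is green."
def breaks_rule(number, colour):
    return number % 2 == 0 and colour != "green"

# What each card shows, and what its hidden side could possibly be.
cards = {
    "3 card":     {"shows": ("number", 3),       "hidden": [("colour", c) for c in ("green", "blue")]},
    "8 card":     {"shows": ("number", 8),       "hidden": [("colour", c) for c in ("green", "blue")]},
    "green card": {"shows": ("colour", "green"), "hidden": [("number", n) for n in (3, 8)]},
    "blue card":  {"shows": ("colour", "blue"),  "hidden": [("number", n) for n in (3, 8)]},
}

for name, card in cards.items():
    kind, value = card["shows"]
    must_turn = False
    for _, hidden_value in card["hidden"]:
        number = value if kind == "number" else hidden_value
        colour = value if kind == "colour" else hidden_value
        if breaks_rule(number, colour):
            must_turn = True   # some hidden side could falsify the rule
    print(name, "-> turn it over" if must_turn else "-> leave it")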
​A computer would be great at this problem, right every time. People, only 30%. Ergo, people are irrational.  
​

But let’s see what happens if we turn things around a little bit. We’ll make the RADICAL assumption that people ARE rational, how about that? And if they get the problem wrong this much, there must be something amiss in the researchers’ assumptions. 

​So what are these assumptions? Basically, it’s this: 
​

Rationality = Logic ​


​​This lines up well with the algorithmic definition of learning, because presumably the rules you need to learn are logical.  

I happen to know a guy who epitomizes this concept. 
[Image: Mr. Spock]

​This is Mr. Spock from the TV series, Star Trek. He’s a Vulcan—notice the ears—and on Vulcan, they believe in logic like a religion.

Now, Spock’s foil on the show is Dr. McCoy,  

[Image: Dr. McCoy]
​
​Dr. McCoy is a folksy country doctor stereotype—nicknamed Bones—and as such, he represents emotion.

In many episodes, Spock and McCoy, logic and emotion, end up arguing opposite sides of a problem. To resolve it, the show brings in Captain Kirk. 
​

[Image: Captain Kirk]

​So let me ask you.

If Spock brings logic, and Bones brings emotion, what does Kirk bring? 
​

​Take your time.









If this were a Kahoot "word salad," I would expect to see, writ large: LEADERSHIP. WISDOM.

And WHAT is the common denominator, according to the literature, of leadership and wisdom?
​

Leadership     Wisdom
---------------------
      Experience


​The COMMON DENOMINATOR of BOTH wisdom and leadership is: Experience. 

When we look back at our assumptive definition, Learning = Acquiring logical rules, we see that the missing element is experience.  

So, if the acquisition of logical rules isn’t learning, what is? 
​

We'll deal with that in the next blog entry.


A la prochaine,
Mitch

P.S. FYI, I'll be speaking about AI in L&D at the Learning Guild's LEARN2021 conference, which is now online (the poster is out of date, but ain't it pretty, though?). Check their website for details.
[Image: the LEARN2021 poster]