The ID Fanatic

Anyone can teach online, they say...

1/18/2022

0 Comments

 

AI3: If Experience is the Castle, Imagination is the Key

11/9/2021

1 Comment

 
[Images: the two games]

Who knows these children's games?

If you grew up in North America in the 20th Century, you probably recognize them as Hot and Cold and Pin the Tail on the Donkey.

In Hot and Cold, you hide something in a room and, as someone tries to find it with their eyes open, you say Hot if they get closer to it and Cold if they move farther away. We use variations to show how close or far they are: Icy cold, Freezing, Boiling hot, and so on.

In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick it in its rear. People still shout directions, but with no visual cues it's kind of a cacophony.

It’s essentially the same game, but in one case there are EXTERNAL GUIDES, and in the other, the child has to GENERATE the hot and cold signals INTERNALLY, in their mind.

Shouldn’t be possible, but it is. It’s literally child’s play.

So how do they do it? 

They pick up on cues in the environment. They listen for the sounds of other people. They recognize things that they bump into and reorient themselves based on their position.

​THEY BUILD A REPRESENTATION OF THE ROOM AND THEIR PLACE IN IT IN THEIR MIND.

Heavy stuff. Fortunately, it's common enough that we have a word for it. 
​
[Image: the word IMAGINATION]
Now consider the computer and how it might learn when it has no experience.

We can give it experience by exposing it to data and asking it to answer a question, e.g., "Is the data a cat or not a cat?" As in Hot and Cold, we can let it know how close or far it is from the right answer, let it adjust and try again. That's called supervised learning.
But is it good enough? No--we need the computer to learn on its own, more like Pin the Tail on the Donkey. ​So the question becomes, can a computer have an imagination? 
Oddly enough, it can. It’s called backprop, short for backpropagation, or “the backwards propagation of errors.”

Instead of getting feedback from an outside observer, we can provide it with an internal loop in which it takes what is essentially a guess, sees the result, compares that to the previous guess, adjusts, and guesses again--just like we do.
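In code, that guess-and-adjust loop can be sketched in a few lines. This is a toy, one-parameter illustration of the idea (all the numbers here are made up; real backprop runs the same kind of loop over millions of parameters in a network):

```python
# Toy "guess, check, adjust" loop: the computer plays Hot and Cold
# with itself. It guesses a number, measures its own error, and
# nudges the guess in whichever direction makes things "hotter".
# (A one-parameter sketch of the idea, not a real neural network.)

target = 42.0      # the hidden thing it is trying to find
guess = 0.0        # start with a wild guess
step = 0.1         # learning rate: how boldly to adjust

for _ in range(200):
    error = guess - target        # signed distance from the answer
    guess -= step * error         # move so as to shrink the error

print(round(guess, 2))  # -> 42.0
```

Each pass uses the size and direction of its own error as an internal “hotter/colder” signal--no outside observer required.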

So in the SUPERVISED LEARNING model of AI (Hot and Cold), you might expose a computer to photos of cats and photos with no cats, and label them.

​The program eventually learns to distinguish photos with cats from those without. Amazing!

But with the UNSUPERVISED LEARNING model (Pin the Tail on the Donkey), you would train the computer using ONLY photos with cats, and no label, then ask it to find cat photos among a bunch of mixed ones, and let it apply its EXPERIENCE to sort them out.
​
As you can imagine, it takes a long time to get this right, and the results are very specific. Still, what has been accomplished so far is quite amazing.

I hope that this background has added to your experience, so that you can bring it to bear upon any new information you come across about AI.
Good luck!
Mitch
My session, Driving the Future of AI, is part of the Learning 2021 Conference, now happening Nov. 15 to 19 online. See you there!

RECAP OF 3-PART BLOG
  1. For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
  2. Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic or b) rationality is not governed by logic after all.
  3. We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), were enough by themselves for rational decision making. For that, we need the leadership and wisdom that comes from Experience (Kirk).
  4. So, if the acquisition of logical rules isn’t learning, what is? Answer: Learning is bringing your experience to bear upon new information. Since experience and new information are both changing all the time, this forms a dynamical system.
  5. Trying the Wason Selection Task with story based instead of abstract elements, we found that humans are actually rational after all when they are given some context to which to apply their experience.
  6. Comparing the supervised and unsupervised learning models of AI to children's games, we found that the key to giving experience to computers is supplying them with an artificial imagination, if you will, via backprop.
  7. When you think about AI with these constraints in mind, you will hopefully be able to meet new AI tools and claims with more informed and realistic expectations.

AI2: The Missing Link: Experience

10/21/2021

0 Comments

 
Number 2 of a 3-part blog. Let's recap:
  1. For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
  2. Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic or b) rationality is not governed by logic after all.
  3. We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), were enough by themselves for rational decision making. For that, we need the leadership and wisdom that comes from Experience (Kirk).
  4. We ended by asking the question:

So, if the acquisition of logical rules isn’t learning, what is? 
​
And the answer is...

"Learning is bringing your experience to bear
on new information."​
-- Mitch Moldofsky
Say what?
[Image: a big box with a small circle inside it]
So consider the big box your experience, i.e., all you have learned from interacting with the universe so far. And consider the circle to be any new information you receive by any of your five senses or unknown senses. Your experience is how you interpret that new information: relevant or irrelevant, actionable or ignorable, worth remembering or expendable, etc. Since everyone's experience is different, everyone's interpretation of new information is different as well, which is why it's so hard for us to agree on anything--this theory included.

That would be the end of the story, except for one thing: the big box of experience KEEPS CHANGING all the time, because the little circle of new information KEEPS CHANGING all the time.
[Image: the box and circle, each changing and influencing the other]
So what we really have is a situation like this--a dynamical system where both our internal, remembered and embodied experience and our external, sense-driven experiences—aka, “new information”—keep changing and influencing one another on an ongoing basis.

This is how we learn, for instance, that Pluto is no longer a planet. Fortunately, he is still a Disney character, so that's a relief.


How does this apply to AI?
Let’s try the Wason Selection Task again, this time with a slight twist.

Your job is to test this rule: “If someone is drinking alcohol, then that person must be age 18 or older.” From where you are standing, you can observe four people: a person drinking soda (you can’t see how old they are); a person drinking beer (you can’t see how old they are); a 30-year-old person (you can’t see what they’re drinking); and a 16-year-old person (you can’t see what they’re drinking). Which of these four people must be checked in order to make sure the rule is being followed?
Source: Puzzlewocky.com
Answer below.



















Original problem: “If one of these cards has an even number on one side, then its other side is green.” Which cards would you turn over, without turning over any cards unnecessarily? Answer: 8 and blue.
You checked the 16-year-old and the beer, right? To see what the 16-year-old was drinking, and how old the beer drinker is. News flash: this is EXACTLY THE SAME PROBLEM as the original.

People are better at the drinking variation because it gives CONTEXT, which allows you to bring your experience to bear upon it. Neat, huh?
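For the logically inclined: both variations boil down to the same “IF P THEN Q” rule, and the only cases that can falsify such a rule are the ones where P is true (does Q hold?) or Q is false (was P true?). A quick sketch of that shared structure (the names and values are just for illustration):

```python
# Both versions of the Wason task test a rule of the shape
# "IF P THEN Q". A case must be checked exactly when it could
# falsify the rule: P is known true, or Q is known false.

def must_check(p, q):
    """p, q in {True, False, None}; None means the hidden side."""
    return p is True or q is False

# Abstract version: "IF even THEN green" -- (P=even?, Q=green?)
cards = {"3": (False, None), "8": (True, None),
         "green": (None, True), "blue": (None, False)}
print([c for c, (p, q) in cards.items() if must_check(p, q)])
# -> ['8', 'blue']

# Drinking version: "IF drinking alcohol THEN 18 or older"
people = {"soda": (False, None), "beer": (True, None),
          "age 30": (None, True), "age 16": (None, False)}
print([c for c, (p, q) in people.items() if must_check(p, q)])
# -> ['beer', 'age 16']
```

Same two-line rule, two very different success rates for humans--that difference is the context.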


You see the problem? If we really want computers to think like we do, we can’t just tell them what to do, we have to make it so they can figure it out for themselves. 

How do we do that? Read the next blog and find out!

(This is so exciting!)
Mitch

FYI, I'm speaking at the now-online LEARN2021 conference, albeit by video.
​Check it out. (NOTE: The dates have changed--they haven't sent an updated promo card.)

AI3: Giving the computer a mind of its own

9/29/2021

0 Comments

 
Who knows these children’s games? ​
[Images: the two games, side by side]
The one on the left is Hot and Cold, the one on the right, Pin the Tail on the Donkey. 

In Hot and Cold, you hide something in a room and as someone tries to find it, you say HOT if they get closer to it and COLD if they move farther away. We use variations to show how close or far they are: Icy cold, Freezing, Boiling hot, and so on.  

In Pin the Tail on the Donkey, you first blindfold a child. Then you spin them around, put a sharp object in their hand, and let them loose in a room to find a picture of a donkey on the wall and stick it in its rear. People do yell hot and cold, but since they are blindfolded it isn't quite as effective.

In a way, you could say it is the same game, but in one case there are external guides and internal feedback (the visual system) for orientation, while in the other there are external guides but no internal feedback.

What? No internal feedback?
How do they orient themselves without internal feedback????


Easy. THEY MAKE IT UP.


Through picking up on cues in the environment, listening for where other people are, recognizing things that they bump into, and so on, they build a representation of the room and their place within it in their mind. 

Heavy stuff. Fortunately, it's a common enough occurrence that we humans have come up with a word for it:
[Image: the word IMAGINATION]
Again, you ask, what does this have to do with AI? 

Well, since this is number 3 of a three-part blog, let's recap what we've covered so far. Feel free to read the previous blogs for a deeper dive, but essentially we have established the following: 
  1. For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do. This meant that learning was the process of acquiring more and more rules; rules that were rational and therefore governed by logic.
  2. Then we did the Wason Selection Task, demonstrating that either a) humans are irrational, i.e., not good at logic or b) rationality is not governed by logic after all.
  3. We then talked about Star Trek, of course, showing that neither Logic (Spock) nor its supposed nemesis, Emotion (Bones), were enough by themselves for rational decision making. For that, we need the leadership and wisdom that comes from Experience (Kirk).
  4. A new definition of learning was offered: Learning is bringing your experience to bear on new information. We discussed the conundrum of this definition, being that both experience and new information are changing all the time.
  5. We then did the Wason Selection Task a second time with contextual cues (beer and people rather than numbers and colours) to demonstrate the point that experience is essential to rational thought.
  6. We finished by asking how to solve the problem of giving computers experience.


And this is where AI comes in.

For what is the value of experience, anyway? Its value is that it allows you to imagine different possibilities. Applying past experience enables you to imagine the future, whereas without it you draw a blank. You have no VISION. So instead of your vision being supplied by the external world (as in Hot and Cold), your brain supplies it using its internal resources, i.e., imagination (as in Pin the Tail on the Donkey). 
​

What AI research has discovered, essentially, is that people cannot solve problems for the life of them—they basically can't learn anything—without using imagination. 

​The question then becomes, can a computer have an imagination?

​Oddly enough, it can.
It’s called backprop: short for backpropagation, or “the backwards propagation of errors.”
[Diagram: backpropagation. Source: Slideteam.net]
Take the matter of cats. Teaching a computer to recognize cats using the Hot and Cold model (aka Supervised Learning) would be a matter of inputting data representing photos of cats labelled "CAT" and photos without cats labelled "NO CAT". Eventually, the computer would be able to tell you whether or not there was a cat in a new photograph, or perhaps in the environment.
Using the Pin the Tail on the Donkey model (aka Unsupervised Learning), you would input data representing ONLY CATS, without labels, giving it THE EXPERIENCE OF CAT (minus the shredding). Then you would show it pictures of cats and no cats and let it figure it out for itself.
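If you like, here's a toy sketch of the difference, using a single made-up "ear pointiness" feature instead of real photos. Everything here is invented for illustration (real systems run neural nets over pixels), and the unsupervised side is simplified down to "flag anything far from my experience of cat":

```python
# Toy contrast between the two models, on one made-up feature
# ("ear pointiness" from 0 to 1). Illustrative only.

# --- Supervised (Hot and Cold): data comes WITH labels ---
labeled = [(0.9, "CAT"), (0.8, "CAT"), (0.2, "NO CAT"), (0.1, "NO CAT")]

def supervised_classify(x):
    # Predict the label of the nearest labeled example.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# --- Unsupervised (Pin the Tail): data is ONLY cats, no labels ---
cats_only = [0.9, 0.85, 0.8, 0.95]
mean = sum(cats_only) / len(cats_only)

def unsupervised_classify(x, tolerance=0.2):
    # Having built up an "experience of cat", call anything
    # far from that experience NOT a cat.
    return "CAT" if abs(x - mean) <= tolerance else "NO CAT"

print(supervised_classify(0.88), unsupervised_classify(0.88))  # CAT CAT
print(supervised_classify(0.15), unsupervised_classify(0.15))  # NO CAT NO CAT
```

Notice what differs: the supervised learner was told the answers; the unsupervised one was only given cats and had to build its own internal picture of "cat" to measure new data against.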

(FYI, this kind of computer program is called a "neural net" because it attempts to simulate how neurons, the building blocks of the brain, work.)

So not to get all philosophical on you, but as I said in the title, this essentially amounts to giving a computer a mind, doesn't it?

Or is that putting Descartes before the horse?

Ooh, I'm getting pun-chy. If you'd like to hear more, I'll be speaking about AI in Learning and Development at the Learning Guild conference, LEARN2021, November 8 in Orlando, FL.

Maybe I'll see you there, with all the other cats.
(It's possible I'll post again before that, talking about AI in L&D. Who nose?)
Mitch

AI1: Teaching Machines to Learn Like Us

9/24/2021

0 Comments

 
AI, or machine learning, is based on human learning.

You won’t believe it, but this is a radical concept.

Radical because there has been a lot of confusion over just how humans learn.

For most of the 20th century, we thought humans, and therefore machines, learned by being told what to do.
 
Ergo, by algorithms:

IF this is so, THEN do this.

​To wit: 
  • IF a ball is thrown at your head, THEN duck.
  • IF there’s lightning, THEN don’t stand under a tree.
  • IF a nuclear bomb drops, THEN hide under your desk.
 
We thought that thinking was a process of having a bunch of these rules in your head and applying them.

This meant that learning was the process of acquiring more and more rules.
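Under that view, a "mind" could be sketched as nothing more than a lookup table of rules, with learning as just adding rows to it. A deliberately naive illustration (the situations and actions here are made up):

```python
# The 20th-century "rule-based" view of learning: intelligence as
# a lookup table of IF-THEN rules, and learning as merely adding
# more rules. (Illustrative sketch only.)

rules = {
    "ball thrown at your head": "duck",
    "lightning nearby": "don't stand under a tree",
}

def learn(situation, action):
    """'Learning', under this model, is just acquiring another rule."""
    rules[situation] = action

def act(situation):
    """Apply a stored rule -- and with no rule, draw a blank."""
    return rules.get(situation, "no rule: do nothing")

learn("nuclear bomb drops", "hide under your desk")
print(act("lightning nearby"))      # -> don't stand under a tree
print(act("smoke in the kitchen"))  # -> no rule: do nothing
```

The last line is the tell: faced with anything outside its rules, this "learner" has nothing to offer--which is exactly the problem the rest of this post takes up.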
 
​

Learning = Acquiring rules ​

​But it’s not. 

​What is learning?

Many people would agree that learning is about the ability to think, by which they mean to be rational. Scientists have developed many studies to test human rationality. There’s a bunch of them. And in every one, we fail. 
​
One is called the Wason Selection Task, and we’re going to do it right now. 
Picture
The Wason Selection Task  
(Source: Puzzlewocky)
[Image: four cards, showing 3, 8, a green face, and a blue face]
Which cards need to be turned over in order to test the truth of the following proposition:
“If one of these cards has an even number on one side then its other side is green.”


​Which cards would you turn over, without turning over any cards unnecessarily?

For example: 3 card only; 8 card only; 3 and blue; 8 and green; all the cards; etc.

​
Come up with your answer, then scroll down.







​
Most common answers:
[Image: the most common answers]
The correct answer is that you must turn over only the 8 card and the blue card. Here is an explanation for each of the cards:
  • 3 card: does not need to be turned over, because it is not even, so it cannot trigger the stated proposition
  • 8 card: even, so it must be turned over, because if the other side is not green, then the proposition is not true
  • green card: many people choose this card, but it does not need to be turned over, because if the other side is odd, then the proposition is not tested, and if the other side is even, that is consistent with the proposition, but it does not prove or disprove the truth of the proposition
  • blue card: does need to be turned over, because if the other side is even, then the proposition is not true
​A computer would be great at this problem, right every time. People, only 30%. Ergo, people are irrational.  
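And indeed, a computer can solve it by brute force: for each card, enumerate every possible hidden side and see whether any of them could break the rule. A sketch (with 3 and 8 standing in for "any odd number" and "any even number"):

```python
# The rule: "IF even THEN green". Each card hides one unknown
# side; a card must be turned over exactly when some possible
# hidden side could falsify the rule. A computer just enumerates.

from itertools import product

# Each card: (number, colour); None marks the hidden side.
cards = {"3": (3, None), "8": (8, None),
         "green": (None, "green"), "blue": (None, "blue")}

def rule_holds(number, colour):
    return number % 2 != 0 or colour == "green"

def must_turn(number, colour):
    # Fill in the hidden side every possible way; the card matters
    # only if some filling breaks the rule.
    numbers = [number] if number is not None else [3, 8]
    colours = [colour] if colour is not None else ["green", "blue"]
    return any(not rule_holds(n, c) for n, c in product(numbers, colours))

print(sorted(name for name, sides in cards.items() if must_turn(*sides)))
# -> ['8', 'blue']
```

No context, no experience needed: pure enumeration gets it right every time, which is exactly where computers and people part ways on this task.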
​

But let’s see what happens if we turn things around a little bit. We’ll make the RADICAL assumption that people ARE rational, how about that? And if they get the problem wrong this much, there must be something amiss in the researchers’ assumptions. 

​So what are these assumptions? Basically, it’s this: 
​

Rationality = Logic ​


​​This lines up well with the algorithmic definition of learning, because presumably the rules you need to learn are logical.  

I happen to know a guy who epitomizes this concept. 
Picture

​This is Mr. Spock from the TV series, Star Trek. He’s a Vulcan—notice the ears—and on Vulcan, they believe in logic like a religion.

Now, Spock’s foil on the show is Dr. McCoy.

​
​Dr. McCoy is a folksy country doctor stereotype—nicknamed Bones—and as such, he represents emotion.

In many episodes, Spock and McCoy, logic and emotion, end up arguing opposite sides of a problem. To resolve it, the show brings in Captain Kirk. 
​


​So let me ask you.

If Spock brings logic, and Bones brings emotion, what does Kirk bring? 
​

​Take your time.









If this were a Kahoot "word salad," I would expect to see, writ large: LEADERSHIP. WISDOM.

And WHAT is the common denominator, according to the literature, of leadership and wisdom?
​

Leadership  
Wisdom 
----------------
​Experience 


​The COMMON DENOMINATOR of BOTH wisdom and leadership is: Experience. 

When we look back at our assumptive definition, Learning = Acquiring logical rules, we see that the missing element is experience.  

So, if the acquisition of logical rules isn’t learning, what is? 
​

We'll deal with that in the next blog entry.


A la prochaine,
Mitch

P.S. FYI, I'll be speaking about AI in L&D at the Learning Guild's LEARN2021 conference, which is now online (the poster is out of date, but ain't it pretty, though?). Check their website for details.
    About Mitch

    I'm an eLearning designer, cartoonist, writer, editor, cogsci grad and video maker--and now podcaster!
