
Alan Turing (the subject of the rightfully award-winning movie The Imitation Game, on Netflix, which basically credits him with winning WWII for the British by leading the development of the first computer to crack a supposedly unbreakable German code, and then shows him being outcast because he was gay) is mainly known as the creator of the Turing Test, the first test of Artificial Intelligence.
The basic premise was that an individual conversing with a person and a computer, both hidden by screens, would not be able to tell which was which.
The test was the basis for the "Voight-Kampff Test" used in the 1982 movie Blade Runner, an entertaining clip of which is provided below (I wonder how they came up with that name?).
So a question to ask is: How does the computer or robot go about fooling the observer? And the answer is: by guessing.
The computer must be able to listen to and parse the spoken language and compose a response based on the words used, in the way they are used, without a degree of hesitation that would tip off the listener. To do this, it needs a s***load of data. Now, the logical way to do this would be to provide a databank of words, rules for how they are used in the language, and some algorithms to direct it in composing a reply. However, this is not how it is done. It is done, rather, by feeding the program millions of sentences in context from different sorts of printed matter and letting it make its best guess.
Which is exactly what the human behind the screen is doing.
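To make that concrete, here is a toy sketch in Python of "guessing the next word from sentences seen in context." Everything here, the little corpus, the bigram counts, the predict_next function, is invented for illustration; real systems use billions of sentences and far more sophisticated statistics, but the core move is the same: count how words tend to follow other words, then guess.

```python
from collections import Counter, defaultdict

# A toy "databank": a handful of sentences in context. A real system
# would ingest millions of sentences from all sorts of printed matter.
corpus = [
    "the detective asks a question",
    "the robot answers the question",
    "the detective watches the robot",
    "the robot guesses the answer",
]

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def predict_next(word):
    """Return the model's best guess for the word that comes next."""
    candidates = follows.get(word)
    if not candidates:
        return None  # never saw this word, so no basis for a guess
    return candidates.most_common(1)[0][0]

print(predict_next("the"))    # "robot" -- it follows "the" most often above
print(predict_next("robot"))  # "answers" -- tied with "guesses", first seen wins
```

Scale the data up by many orders of magnitude and swap the bigram counts for cleverer statistics, and you get a guesser good enough to hold up its end of a conversation.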
Take the Blade Runner interview. The detective's objective is to find out whether the person opposite is a robot or a human. The robot has to answer biographical questions based on implanted memories, and the interviewer has to determine whether these guesses at human-like responses are real enough. If a human is being interviewed, the truth is they are also guessing at the answers based on their own faulty memories. These memories, whether real or implanted, may be considered "top-down" knowledge, the interview questions "bottom-up" inputs.
People develop top-down systems their whole lives to deal with new bottom-up inputs.
The type of training AI systems go through builds top-down knowledge systems from massive inputs rather than from years of lived experience.
What I'm saying is, we don't know things any more than the computer knows things. Our knowledge is based on years of input gathering and pattern recognition, figuring things out by guessing and verifying over and over again. But because our conclusions are based on our unique experience profiles, they are not necessarily the same as our neighbor's, which is why you have Liberals and Conservatives.
So if this is how people naturally learn, i.e., by experiencing things and figuring them out through guesses and verification, how can we L&D folk harness it?
A simplistic reading would lead to rules of thumb like teach by doing or teach by example. Going a little further, we would include things like teach by making mistakes and teach by bad examples.
But digging deeper, we have to recognize that the human capacity for finding order in chaos, for finding patterns within disorganized material, for making sense of nonsense, may not be entirely practical as an instructional approach. It takes too long. Life experience teaches this way, which is why some people insist that apprenticeship is the only way to learn certain jobs; and they may not be wrong.
However, it should make us question our main technique of spoon-feeding content and checking for comprehension or application. This clearly does not jibe with natural learning. Even the average simulation is not deep or wide enough to take advantage of it.
So what to do?
I don't know, I'm just bringing it up as something worth thinking about.
; )
But while I'm on the subject of AI, we should be thinking as a community about how AI can be used to improve our products. The challenge is to think of problems that have been extremely resistant to past IF-THEN approaches but that may be accessible through an AI lens.
What comes to mind for me is that I have always been overly hopeful, and therefore overly disappointed, about our inability to screen students for preexisting knowledge and use that to customize the learning for them alone. Bespoke learning is an area that elearning has always seemed to me to be perfect for, and yet we are still providing standard lessons to everyone, with some branching, perhaps, based on their job description or things like that.
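Just to anchor what "resistant to IF-THEN" means here: the version of customization we can already build is crude branching like the hypothetical sketch below, in which the module names, the scores, and the 0.8 mastery cutoff are all invented for illustration.

```python
# A hypothetical pretest-driven lesson path. The module names, scores,
# and the 0.8 mastery cutoff are invented for illustration; this is the
# brittle IF-THEN style of branching, not an AI solution.
MASTERY_THRESHOLD = 0.8

LESSON_MODULES = ["terminology", "basic_procedure", "edge_cases", "troubleshooting"]

def build_lesson_path(pretest_scores):
    """Skip any module the learner already shows mastery of on the pretest."""
    path = []
    for module in LESSON_MODULES:
        score = pretest_scores.get(module, 0.0)  # no score = assume no knowledge
        if score < MASTERY_THRESHOLD:
            path.append(module)
    return path

# A learner who aced terminology but struggled with the basic procedure:
print(build_lesson_path({"terminology": 0.95, "basic_procedure": 0.6}))
# -> ['basic_procedure', 'edge_cases', 'troubleshooting']
```

What an AI lens might add is everything this sketch can't do: inferring what a learner actually knows from how they answer rather than whether they clear a cutoff, and reshaping the material itself instead of just skipping chunks of it.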
If any of you have some barrier busting hunches as to how AI might be applied, I'd love to see them in the comments.
Peace, and get vaccinated whenever you can.
Mitch (Not a robot.)