I’m working as one of a team of coaches for a large client where we are introducing a basic “agile” development model. The external coaches and internal (client) mentors are having an off-site event soon to improve our cohesiveness as a team.
One of the things we’re doing to prepare for the event is to take an online self-assessment known as StrengthsFinder, offered by Gallup and based on the book StrengthsFinder 2.0 by Tom Rath. Unfortunately, my top five strengths are:
I say “unfortunately” because I can’t just let the assessment run its course. I feel compelled to take it apart, no doubt as a direct consequence of these particular “strengths.”
My first reaction on seeing the result was to think, “That’s right on the mark.” Almost immediately, it occurred to me that the result appeared to be right on the mark because it was based on my own answers to 177 questions. The assessment was not administered by an objective, qualified third party. How could the result not appear to be right on the mark from my perspective? It could only affirm my own self-image.
It occurs to me that the only possible result of any such assessment must be to affirm the respondent’s self-image. A simple computer program matches the answers with scores associated with each of the possible “strengths” in Rath’s model. That’s all. The respondent answers the questions based on his or her own self-image. Naturally, the program spews out some boilerplate text that reflects that same image back at the respondent.
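The scoring step described above is simple enough to sketch. Everything in this example is an assumption for illustration: the theme names, the answer-to-weight table, and the tallying rule are made up, since Gallup’s actual item bank and model are proprietary.

```python
# Hypothetical sketch of a StrengthsFinder-style scoring program:
# each answer adds points to one or more themes, and the top five
# themes become the respondent's "strengths." All data here is invented.
from collections import Counter

# Maps an answer (question:choice) to theme weights -- hypothetical.
ANSWER_WEIGHTS = {
    "q1:a": {"Analytical": 2, "Learner": 1},
    "q2:b": {"Intellection": 2},
    "q3:a": {"Learner": 2, "Input": 1},
    "q4:c": {"Analytical": 1, "Deliberative": 2},
}

def top_strengths(answers, n=5):
    """Tally theme scores from the respondent's answers and
    return the n highest-scoring themes."""
    scores = Counter()
    for answer in answers:
        scores.update(ANSWER_WEIGHTS.get(answer, {}))
    return [theme for theme, _ in scores.most_common(n)]

print(top_strengths(["q1:a", "q3:a", "q4:c"]))
```

Whatever the respondent reports about themselves goes into the tally, so the output can only ever be a reshuffling of the respondent’s own self-description.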
The descriptions of the five strengths are presented in very general language. I was reminded of horoscopes, fortune cookies, and those old-time penny fortune scales. Anyone reading them may well feel as if the descriptions apply to them.
But there’s a bit more science to it than that, isn’t there? Or maybe the truth is that there was a bit more science to old-time penny fortune scales than we might assume. At any rate, when Gallup is involved, it costs you more than a penny to get your fortune. Is it worth it?
I decided to carry out a scientific experiment. I visited www.horoscope.com and checked my official horoscope for the day. It read:
If you keep waiting for things to happen, Cancer, you may wake up one morning and realize that your life has passed and you never did half the things you dreamed of doing. The time to take action is now. Put your plan into effect. You may need to make some compromises, but you will find that, in general, people will follow your lead.
Well, whaddayaknow? It’s right on the mark!
I checked my Chinese horoscope as well (I’m Horse), and found this:
If you are feeling stressed and doing too much for others, think about what can be cut out. You do not have to be responsible for people who will not change destructive behavior. A small gesture of encouragement toward someone you admire can open doors of friendship and deeper affection.
Also right on the mark. Isn’t science amazing?
The generic answers reminded me of an old artificial intelligence experiment at MIT. In 1965 a team at MIT created a command-line program called ELIZA, described in Early Artificial Intelligence Projects (.doc), that attempted to carry out a conversation with a human being by recognizing certain keywords in the human’s input and constructing questions and statements in response.
The linked document contains the following account Daniel Bobrow gave of an unplanned encounter between the Vice President of BBN and an abandoned ELIZA program unwittingly left running on a computer:
One Saturday morning about 9 am., a Vice President of Bolt, Beranek, and Newman [BBN]… found a note on the PDP-1 teletype that read: “TYPE ALL MESSAGES INTO THIS TELETYPE AND END THEM WITH A PERIOD.” This note was left there by one of the programming bums who had just succeeded in making run a LISP version of the DOCTOR program (based on ELIZA as developed by Weizenbaum). However, the VP thought that this note indicated that this teletype was linked with the one in my home, since I often use the computer all night. Therefore, he started typing to “me,” but in fact was typing to this version of the DOCTOR program. The following conversation ensued:
vp: Do you think that I can use the computer this morning?
eliza: Why do you ask?
vp: I might be able to make some additional Telcom sales.
eliza: Why aren’t you so sure?
vp: My perspective customers have never seen a demonstration of the system.
eliza: Does this mean very much to you?
vp: Of course it does.
eliza: You seem very positive about that.
vp: This is ridiculous.
eliza: What do you mean, this is ridiculous?
vp: Is there any reason why I can’t use the computer now?
eliza: Tell me more about your perspective customers.
vp: Please dial me up on 491-1850
Note that after the last remark the VP did not obey instructions, and left out the period. Therefore, of course, the computer didn’t answer him. This so infuriated the VP, who thought I was playing games with him, that he called me up, woke me from a deep sleep, and said:
vp: Why are you being so snotty to me?
bobrow: What do you mean I am being snotty to you?
The VP angrily read me the dialog that “we” had been having, and couldn’t get any response but laughter from me. It took a while to convince him it really was a computer.
The story of ELIZA and DOCTOR gets even funnier. People started to confide in DOCTOR their deepest fears and problems. The program responded with statements very similar to those a therapist might make during a counseling session. Even when they were assured DOCTOR was just a piece of software that knew nothing about personal problems, people continued to use the program for self-therapy. They were sure the program understood them and cared about them.
On Yahoo! Answers, someone posted the question, “How can you talk like a therapist?” The best answer, as chosen by voters, was: Just keep saying “How does that make you feel?” and “What does that mean to you?”
It works. But why?
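ELIZA’s trick, as the transcript above suggests, was little more than scanning the input for a keyword and replying with a canned, open-ended question. The rules below are a toy subset I invented for illustration, not Weizenbaum’s actual DOCTOR script:

```python
# Minimal ELIZA-style responder: find the first matching keyword in the
# input and return its canned reply; otherwise fall back to the generic
# therapist prompt. The rule list is a made-up, simplified example.
RULES = [
    ("computer", "Do computers worry you?"),
    ("mother",   "Tell me more about your family."),
    ("i feel",   "Why do you feel that way?"),
]
DEFAULT = "How does that make you feel?"

def respond(text):
    """Return the canned response for the first keyword found in the
    input, or the generic prompt when nothing matches."""
    lowered = text.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT

print(respond("Do you think that I can use the computer this morning?"))
# -> Do computers worry you?
print(respond("This is ridiculous."))
# -> How does that make you feel?
```

Notice that the program understands nothing; the open-ended replies invite the human to supply all the meaning, which is exactly why the conversation feels coherent.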
A later project at MIT sought to duplicate human emotional responses in a machine. The Kismet head had video and audio sensors and could respond to stimuli from its environment by changing its facial expressions and making sounds. It would show “interest” in new objects in its environment, and gradually lose “interest” as it became habituated to their presence. It would respond “emotionally” to people speaking in different tones of voice (although it was not programmed to try to understand language).
The engineering and the software to support Kismet’s senses and movements were quite challenging, but the programming that mimicked human emotional responses turned out to be pretty straightforward. There were basically nine emotional responses, described here. Various stimuli elicited specific responses according to a simple set of rules.
The fact that programs like DOCTOR, Kismet’s emotion system, and the StrengthsFinder assessment — not to mention fortune cookies — are able to produce credible results could mean they are based on substantial science, or it could mean that we humans are simpler and more predictable machines than we might like to believe.