Wednesday, December 31, 2008

When Daddy's tour in Iraq is extended yet again...

Dear Dr. XX,

Thank you for asking me to consult on aspects of children's socio-emotional development relevant to the Department of Defense's solicitation titled, "Virtual Dialogue Application for Families of Deployed Service Members."

I was appreciative of the fact that the DoD opened with a historical perspective:

The ability to reach out and communicate with loved ones from areas of conflict is better than at any time in history. Nevertheless, the stresses of deployment might be softened if spouses and especially children could conduct simple conversations with their loved ones in immediate times of stress or prolonged absence. Historically, families have derived comfort and support from photographs or mementos, but current technology SHOULD allow for more personal interactive messages of support. Over 80% of American children between the ages of three and five regularly use computers, and 83% of families have a computer in their home. So, computer-based applications would resonate with children and capture their interest and imagination. The challenge is to design an application that would allow a child to receive comfort from being able to have simple, virtual conversations with a parent who is not available "in-person".


It's wonderful that the military is prepared to spend substantial money to improve the well-being of military families. They may hope families will be more tolerant of repeated and prolonged tours if they can speak with a state-of-the-art artificial intelligence.

Still, I confess I am skeptical of the utility of an artificial intelligence program which mimics parental dialogue. Is there any evidence that children aged 3-5 will understand that the avatar on the screen is supposed to be their parent? I wouldn't envy the job of a mother who has to train her 3-year-old to comprehend this. (Just think of the bright, happy, forced energy required in "Let's go say good-night to Daddy!") And once children do understand, how will they sort out that this is an artificial intelligence, not really their parent? Recall young children's difficulty with the "appearance-reality distinction" (documented by Piaget and the American developmental psychologist John Flavell). This may create further confusion: where/what is my father?

Great background-story for a dystopian novel: In the early 21st century, when the protagonist was only three, he was beta-tested on a military AI project...

I saw that spouses were mentioned as possible beneficiaries of the proposed product. Would spouses really take comfort from an AI program saying the couple's tender phrases? Or perhaps the topic proposers are remembering a book they read in the 1970s, "The Stepford Wives" -- the AI will be better than a human being because it can cater to the fantasies of the left-behind spouse.

From a research standpoint, the first step in phase 1, "1. Develop metrics to determine user acceptance, usability, and content requirements" should be completed before anything else is attempted. Indeed, a simple questionnaire for armed services personnel could be sufficient to find out whether an AI could "soften the stresses of deployment." I was surprised this hasn't been done -- but indeed, the references section was about what one would expect from an undergraduate term paper -- idiosyncratic, showing little awareness of background research in the relevant scientific disciplines.

One more question: Do you have any thoughts on why the DoD does not simply try to provide (more) real-time video computer connection (as is possible with Skype and iChat)?

Sincerely,

HumanProject

Note: I learned about this project by receiving a request to be a consultant, but after googling I saw that bloggers have already called it a "creepy government project."

(And I really wanted to develop AIs back when I was a teenager at the dawn of modern computing...)

10 comments:

Nina said...

...the IgNobels put up a note:


http://improbable.com/2009/01/01/us-military-build-intelligently-artificial-parents/

Anonymous said...

Did the proposal call for an "avatar"? My read of it is that they wanted video footage, or a high-res 3D rendering, in which case it seems that a child should have no more trouble understanding that it is their mother/father than they would if it was a live video feed... Just my $0.02...

HumanProject said...

Anonymous,
This is an excellent point. You're right, they wanted video or high-res 3D. I confess I'm a little preoccupied with avatars, since that's the direction my research team is going in designing a virtual reality environment for foreign language learning.

So let me rephrase the issue. As you say, the child will see the high-res image and recognize their parent. But developmental psychologists just don't know how really young children -- age 3 to 5 -- conceptualize interacting with video footage of a parent, both when it is a regular video stream and, especially, when it offers "contingent responding" -- answering questions and saying familiar phrases.

What children understand of video is unclear. The data from studies of TV suggest that at age 3 to 5, children think TV images are like a magic window on reality, as if every TV show were a live feed and those events were really happening. It isn't until ages 8-9 that children start to understand the whole idea of acting, rehearsal, stage directions, scripts, etc. Consider the classic studies of TV understanding (whose results didn't change from the 1970s through the 80s -- I've always wanted to see how they hold up now). When shown a TV image of popcorn, young children say the popcorn will spill if the TV is turned upside down. Now, whether they really think it will spill is unclear, but the fact that they say it will spill (while 5-year-olds laugh at the idea of the popcorn spilling) suggests that by age 5, but not before, kids finally understand how objects on TV differ from physical objects in the room.

How 3-5 year-olds understand home-video footage of family events, with a parent present or absent, is not something I've read about, but anecdotally it doesn't seem to cause any problems: it's just a moving picture of a parent, like a photo. But how would 3-year-olds react to a high-res rendering with contingent responding? Why is Daddy only on the screen? Mommy said Daddy was far away (and so on).

What do you think?

Anonymous said...

Professor,

I should preface my remarks by saying that this is not at all my area of expertise, so I will have to defer to your expert opinion on these matters.

With that said, it seems to me that if we are not worried about the psychological effects of subjecting young children to live video interactions with a parent (e.g. via Skype), then we should not be concerned about subjecting them to realistic, but artificial, interactions. Assuming of course that the interactions are of sufficient quality that they *appear* (to the mind of a young child) to be identical to the live interactions. I realize that that is almost certainly a HUGE assumption since current technology does not seem to be sufficiently advanced to pull this off. However, my guess is that the technology may not need to be as advanced as one might think because it is probably easier to fool a young child than an adult. So, perhaps an issue that should be addressed is: How close to "perfect" does the automated assistant have to appear in order to "fool" a child and thus cause no more harm than a live interaction?

I'm sure I am missing some crucial psychological issue here, so please excuse my naive ramblings.

Anonymous said...

Anonymous,

Even if the military could "fool" children, should they?

Professor Caldwell-Harris has already started asking the common sense questions these researchers should have asked before hatching this hare-brained scheme. How would participating in this deception make the spouses feel? How would it make the kids feel later when they find out they were saying "I love you" to a bot? How would it make soldiers feel when the army says "we've created a robot to tuck your kids in at night so you don't have to"?

I keep seeing ideas like this coming out of both military and academic research. A recent job applicant I interviewed described a project he had worked on at Carnegie Mellon. Essentially it was -- Problem: old people are lonely. Solution: give them robots so they will need even less human contact.

The need for human contact is not a problem to be "solved". It is core to our humanity. It should be celebrated and encouraged. And, when missing, it should be mourned and honestly dealt with.

John Cartan

HumanProject said...

Re: the comment above about how an AI parent may be no more damaging than Skype:

What do you think about this:

A high-res rendering of a person who can hold a basic conversation is just the next technological level up from what people have always used to stimulate their memories: first paintings, then photos, then video, and now AI combining familiar phrases with natural language processing. No one worried that a family video was supposed to replace family gatherings. So in this case, the AI parent is just another (albeit very advanced) tool to facilitate one's own memory processes, extending the role of photos.

It also seems that the military's project could be very useful as a basic science topic for doctoral students. It would be a nice way to get a developmental psychology grad student talking with a computer science/AI grad student. They would share expertise: the point would be to learn more about how children conceive of virtual persons, and it would be interesting for the AI grad student to think about natural language understanding in the context of specific (and narrowed-down) communicative goals, thus making the Turing Test more manageable.

HumanProject said...

John Cartan,

Thanks for bringing up the "robots for the elderly" project -- very pertinent.

You write, "The need for human contact is not a problem to be 'solved'."

Welcome to capitalism. More specifically, the problem to be solved is: "How can we make money off of the human need for contact?"

Anonymous said...

Professor,

I think you make an excellent point with your characterization of this technology as being the next step in the evolution of the "tools" that are used to facilitate memory processes. In fact, the original proposal states, "Historically, families have derived comfort and support from photographs or mementos, but current technology SHOULD allow for more personal interactive messages of support."

As an aside, I thought it was quite responsible of the DOD to require that anyone wishing to submit a proposal MUST include the use of "trained psychological health and family advocacy experts". By requiring proposers to consult with experts, such as yourself, the DOD appears to be demonstrating their desire that this technology be developed "responsibly".

Finally, the interdisciplinary doctoral research you suggested sounds like a brilliant idea! So many interesting possibilities there!

Thank you for your very interesting thoughts!

Anonymous said...

Dear "HumanProject",

Yes, I am familiar with the concept of capitalism. And I can tolerate crass motives if they lead to good results.

I guess what bothers me in this case is what I see as a disconnect between real human suffering and self-satisfying academic research justified on the basis that it will help alleviate that suffering.

In their zeal to wrangle with technological challenges and come up with killer demos, researchers too often betray a startling lack of understanding or even real interest in the people they are supposedly trying to help.

As a parent, my heart goes out to families torn asunder by war. There are no easy answers to the loneliness and confusion of innocent children separated from a parent (or of a forgotten old person who simply misses the touch of a human hand). My deeper point is that sometimes there *shouldn't be*. Suffering can lead to spiritual growth, and, among other things, a reluctance to start new wars.

In my experience, if you really want to help fellow human beings in pain, the single most important thing is to deal with them honestly. That's why I found the goal of successful deception particularly inappropriate in this case.

Of course, as my wife immediately pointed out, real kids are tech savvy and may often be quicker to see through this perverse Turing Test than the grownups selling it. And within nanoseconds of seeing through it, they would begin to use that knowledge to their advantage. I can easily imagine a conversation like this:

Child: "I want to play video games but Mommy says I can't. If you think it's OK, say 'I love you, son'."
Robot: "I love you, son."

Like you, I really wanted to develop AIs when I was a teenager (back in the 70s). I value research and the people who do it. I just wish we would all do a better job of understanding the human head (and heart!) before we try to recreate it.

John Cartan

HumanProject said...

John,
Many great points in your comment. The idea that kids would learn to manipulate the technology is a good insight. Thanks for posting a comment.