Oh... boy....
http://ngm.nationalgeographic.com/2011/0...rroll-text
Quote: The Actroid androids are part of a new generation of robots, artificial beings designed to function not as programmed industrial machines but as increasingly autonomous agents capable of taking on roles in our homes, schools, and offices previously carried out only by humans. The foot soldiers of this vanguard are the Roomba vacuums that scuttle about cleaning our carpets and the cuddly electronic pets that sit up and roll over on command but never make a mess on the rug. More sophisticated bots may soon be available that cook for us, fold the laundry, even babysit our children or tend to our elderly parents, while we watch and assist from a computer miles away.
“In five or ten years robots will routinely be functioning in human environments,” says Reid Simmons, a professor of robotics at Carnegie Mellon.
[...]
HERB is a homely contraption, with Segway wheels for legs and a hodgepodge of computers for a body. But unlike pretty Yume, HERB has something akin to a mental life. Right now the robot is improving its functionality by running through alternative scenarios to manipulate representations of objects stored in its memory, tens of thousands of scenarios a second.
“I call it dreaming,” says Siddhartha Srinivasa, HERB’s builder and a professor at the Robotics Institute at Carnegie Mellon. “It helps people intuitively understand that the robot is actually visualizing itself doing something.”
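To give a rough sense of what that "dreaming" amounts to, here is a toy sketch in Python: sample a large number of candidate manipulation plans against a stored model of the scene, discard the ones that fail, and keep the best survivor. Every name, number, and scoring rule here is invented for illustration; it is not HERB's actual planner, just the general shape of a sample-and-score loop.

```python
import random

def collides(plan, blocked_angles):
    """Toy collision check: a plan fails if its approach direction is blocked."""
    return any(abs(plan["angle"] - a) < 10 for a in blocked_angles)

def score(plan):
    """Toy preference: approach slowly and grip near the object's center."""
    return -plan["speed"] - abs(plan["grip_offset"])

def dream(blocked_angles, n_scenarios=10_000):
    """Sample many scenarios; return the best collision-free plan found."""
    best = None
    for _ in range(n_scenarios):
        plan = {
            "angle": random.uniform(0, 360),       # approach direction, degrees
            "speed": random.uniform(0.1, 1.0),     # arm speed, arbitrary units
            "grip_offset": random.uniform(-5, 5),  # cm from the object's center
        }
        if collides(plan, blocked_angles):
            continue  # this imagined scenario ends badly; discard it
        if best is None or score(plan) > score(best):
            best = plan
    return best

plan = dream(blocked_angles=[90, 270])  # two directions blocked by obstacles
print(plan)
```

At tens of thousands of imagined scenarios per second, even this brute-force approach finds serviceable plans quickly, which is why Srinivasa's "visualizing itself doing something" framing is apt.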
[...]
To negotiate human spaces, robots like HERB need to perceive and cope with unfamiliar objects and move about without bumping into people who are themselves in motion. HERB’s perception system consists of a video camera and a laser navigation device mounted on a boom above his mechanical arm. (“We tend to think of HERB as a he,” Srinivasa says. “Maybe because most butlers are. And he’s kind of beefy.”) In contrast to a hydraulic industrial robotic armature, HERB’s arm is animated by a pressure-sensing system of cables akin to human tendons: a necessity if one wants a robot capable of supporting an elderly widow on her way to the bathroom without catapulting her through the door.
[...]
Picking up a drink is dead simple for people, whose brains have evolved over millions of years to coordinate exactly such tasks. It’s also a snap for an industrial robot programmed for that specific action. The difference between a social robot like HERB and a conventional factory bot is that he knows that the object is a juice box and not a teacup or a glass of milk, which he would have to handle differently. How he understands this involves a great deal of mathematics and computer science, but it boils down to “taking in information and processing it intelligently in the context of everything he already knows about what his world looks like,” Srinivasa explains.
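That phrase, "processing it intelligently in the context of everything he already knows," can be illustrated with a toy Bayesian-style fusion: weight a raw detector's ambiguous guesses by prior knowledge of what the scene usually contains. The probabilities below are made up for the example and have nothing to do with HERB's real perception pipeline.

```python
# Raw detector output: visually, the object is ambiguous.
detector_scores = {"juice box": 0.4, "teacup": 0.35, "glass of milk": 0.25}

# Prior knowledge of this particular kitchen: mostly juice boxes seen here.
scene_prior = {"juice box": 0.7, "teacup": 0.2, "glass of milk": 0.1}

def classify(scores, prior):
    """Weight each detection by the scene prior and renormalize to sum to 1."""
    posterior = {k: scores[k] * prior[k] for k in scores}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

beliefs = classify(detector_scores, scene_prior)
best = max(beliefs, key=beliefs.get)
print(best)  # context pushes the ambiguous detection toward "juice box"
```

The point of the example is the one Srinivasa makes: the label matters because it changes how the object must be handled; a juice box tolerates a grip that would shatter a glass.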
[...]
The researcher who has gone the furthest in designing ethical robots is Ronald Arkin of the Georgia Institute of Technology in Atlanta. Arkin says it isn’t the ethical limitations of robots in battle that inspire his work but the ethical limitations of human beings. He cites two incidents in Iraq, one in which U.S. helicopter pilots allegedly finished off wounded combatants, and another in which ambushed marines in the city of Haditha killed civilians. Influenced perhaps by fear or anger, the marines may have “shot first and asked questions later, and women and children died as a result,” he says.
In the tumult of battle, robots wouldn’t be affected by volatile emotions. Consequently they’d be less likely to make mistakes under fire, Arkin believes, and less likely to strike at noncombatants. In short, they might make better ethical decisions than people.
In Arkin’s system a robot trying to determine whether or not to fire would be guided by an “ethical governor” built into its software. When a robot locked onto a target, the governor would check a set of preprogrammed constraints based on the rules of engagement and the laws of war. An enemy tank in a large field, for instance, would quite likely get the go-ahead; a funeral at a cemetery attended by armed enemy combatants would be off-limits as a violation of the rules of engagement.
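The governor described above is essentially a hard constraint check that runs before any firing decision. A minimal sketch of that idea, using the article's own two examples, might look like the following; the constraint names and rules here are invented for illustration, and Arkin's actual architecture is far more elaborate.

```python
# Invented list of protected sites; real rules of engagement are far richer.
FORBIDDEN_SITES = {"cemetery", "hospital", "school", "place of worship"}

def governor_allows(target):
    """Return (decision, reason) for a proposed engagement."""
    if not target["is_combatant"]:
        return False, "target is not a combatant"
    if target["location"] in FORBIDDEN_SITES:
        return False, f"engagement at a {target['location']} violates the rules of engagement"
    if target["civilians_nearby"]:
        return False, "unacceptable risk to noncombatants"
    return True, "all constraints satisfied"

# The article's two cases: a tank in an open field vs. a funeral at a cemetery.
tank = {"is_combatant": True, "location": "open field", "civilians_nearby": False}
funeral = {"is_combatant": True, "location": "cemetery", "civilians_nearby": True}

print(governor_allows(tank))     # permitted
print(governor_allows(funeral))  # refused
```

Note the design choice: the constraints veto actions rather than recommend them, so the robot defaults to not firing unless every rule is satisfied.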
[...]
Back at Carnegie Mellon it’s the final week of the spring semester, and I have returned to watch the Yume Project team unveil its transformed android to the Entertainment Technology Center’s faculty. It’s been a bumpy ride from realism to believability. Yan Lin, the team’s computer programmer, has devised a user-friendly software interface to more fluidly control Yume’s motions. But an attempt to endow the fembot with the ability to detect faces and make more realistic eye contact has been only half successful. First her eyes latch onto mine, then her head swings around in a mechanical two-step. To help obscure her herky-jerky movements and rickety eye contact, the team has imagined a character for Yume that would be inclined to act that way, with a costume to match—a young girl, according to the project’s blog, “slightly goth, slightly punk, all about getting your attention from across the room.”
That she certainly does. But in spite of her hip outfit—including the long fingerless gloves designed to hide her zombie-stiff hands and the dark lipstick that covers up her inability to ever quite close her mouth—underneath, she’s the same old Actroid-DER. At least now she knows her place. The team has learned the power of lowering expectations and given Yume a new spiel.
“I’m not human!” she confesses. “I’ll never be exactly like you. That isn’t so bad. Actually, I like being an android.” Impressed with her progress, the faculty gives the Yume team an A.