Artificial people are not a new thing. To be clear, I am not talking about non-human legal entities, nor do I mean the sort of folk you meet at cocktail parties who can talk for hours without saying anything. Here I refer only to people whose creators are other technically skilled people.
This notion of artificial people goes back as far as recorded history, with mythical “beings” dating to circa 400 BC. The famous notebooks of Leonardo da Vinci contain sketches of humanoid mechanisms dated 1495. It’s unclear whether he ever tried to build any of them, but in an age when scientists often ran afoul of the Church, doing so openly would not have been wise.
More recently, Disney has dabbled in this stuff as well. I remember as a teenager on one of my first visits to Disneyland, there was much buildup and excitement leading to “Great Moments with Mr. Lincoln”. After all the fanfare, a somewhat stilted life-sized likeness of Honest Abe spoke briefly in a small, dimly lit theatre. Another early Disney entry into artificial creatures was the Enchanted Tiki Room, where to this day the artificial birds continue to annoy unwitting crowds.
Software has taken on an increasing role, graduating from sequentially operating actuators and playing recorded messages to making simple decisions. Back when home computers were first becoming popular, one of the more intriguing programs was called “Eliza”, billed as a computer psychiatrist. A rudimentary predecessor to Siri, Eliza would parse your typed input and mimic the responses of a couch-doc.
Eliza: Hello, I am Eliza
Paul: Hello, I am Paul
Eliza: Did you come here because you are Paul?
Paul: Why, is there something wrong with being Paul?
Eliza: What answer would please you the most?
Paul: That being Paul is OK.
Eliza: I see.
Astute readers will note the uncanny resemblance to a real shrink.
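For the curious, Eliza’s trick can be sketched in a few lines: ranked pattern-matching rules plus pronoun “reflection” so the echo reads naturally. The rules below are an illustrative guess at the idea, not Weizenbaum’s original script.

```python
import re

# Swap first- and second-person words so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "am": "are", "my": "your",
               "you": "I", "your": "my", "me": "you"}

# Ranked rules: the first pattern found in the input wins.
RULES = [
    (r"\bi am (.+)", "Did you come here because you are {0}?"),
    (r"\bwhy\b", "What answer would please you the most?"),
    (r".*", "I see."),
]

def reflect(fragment):
    """Swap person-words in a captured fragment ("my dog" -> "your dog")."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(statement):
    """Answer with the template of the first rule that matches."""
    text = statement.rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("Hello, I am Paul"))
# Did you come here because you are Paul?
```

Three regular expressions and a dictionary are enough to reproduce the exchange above, which says something about both the program and the profession it imitates.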
The convenient term most of us use for machines that do the work of people was adapted from the Czech word for drudgery or forced labor, “robota”. The play that introduced “robot” to the world in 1920, Karel Čapek’s R.U.R., was actually about artificial humans who toil away at tedious, low-level jobs. In the end, they revolt and kill off the human race. This may explain why the Enchanted Tiki Room makes people anxious.
What happens when the ever-more sophisticated electromechanical humanoids are paired with increasingly refined computer programs? The thing that makes most of us uncomfortable is not just the idea of an automated machine, but the marriage of such a device with an AI capable computer. Such artificial progeny have the potential to go off-script, and no one is sure what happens next.
Take warfare for example. The idea of remotely piloted drones attacking enemy targets with minimal risk to US troops has widespread appeal. Adding software to help recognize targets and make firing “suggestions” seems like an obvious enhancement. The leap from human-monitored to completely autonomous lethal weapons guided by artificial intelligence is shorter than we might imagine.
Phalanx is a computer-controlled gun system that can fire at incoming threats without human intervention. It is currently installed on many US Navy vessels. Future wars could ultimately be fought between opposing groups of engineers who may not even know a war is underway.
Then there is medicine. Most of us have allowed a machine in a drugstore to squeeze an arm and take an automated blood pressure reading. This level of faith is now routine, but how about trusting one to take a blood sample? Personally, I’m not ready to offer my arm to a rookie robo-phlebotomist.
IBM’s Watson is used in some medical settings to diagnose and suggest treatments for various conditions, but so far human doctors still have the last word. Robots are now being employed in the tedious, follicle-by-follicle transplantation of hair, but using them to remove a brain tumor, under the direction of Dr. Watson, would be a very different story.
Perhaps the most imminent and impactful development for the majority of us is the emergence of autonomous cars. Well over a million people die in car crashes each year, and the majority of these are the result of driver error. It is not hard to imagine that a machine could do better.
A good part of driving, however, involves distributing risk among everyone on the road, regardless of their means of transportation. An important assumption baked into motor vehicle operation is that a person of good judgment will know when to disregard the letter of the law to honor the spirit of the law. Would your car drive you off a bridge into a lake to avoid running into a crowd of schoolchildren? These are not idle questions; Ford promises to have driverless cars in five years, and Uber is currently using self-driving taxis (for now with a human co-pilot) in Pittsburgh.
Why do robots with AI still make us uneasy? We routinely fly on planes with autopilot, and the cars we spend so much time in have gradually added cruise control, automatic braking, and fully automated parallel parking. Robot-assisted surgery has been around for decades, and AI-guided weapons are becoming commonplace on US warships. Psychologists, sociologists, anthropologists, linguists and others are working to understand more about all the roles we play in life, and what makes us trustworthy in each.
One problem with figuring out the right thing to do is that cultures, customs and laws vary so much around the world. An autonomous car might be programmed to survey its surroundings when it detects a horn, but its little AI brain would probably overheat in Spain, where this sound means nothing more than “Look at me, I have a horn!” In New York City, the horn would undoubtedly be accompanied by a single raised finger, but it’s important to recognize which finger (or thumb) it is.
Eliza is only as good (or bad) as the computer code behind the curtain. The same is true for your Tesla Model S. How do you feel about being driven around by an Uber driver you’ve just met, versus an engineer you will never know? Who do you trust to do the right thing?
Eliza: Hello, I am Eliza
Eliza: How are you today... What would you like to discuss?
Paul: Can I trust you?
Eliza: Perhaps you don’t want to trust me?
Paul: I see
Author Profile - Paul W. Smith, a Founder and Director of Engineering with INVENtPM LLC, has more than 35 years of experience in research and advanced product development.
Prior to founding INVENtPM, Dr. Smith spent 10 years with Seagate Technology in Longmont, Colorado. At Seagate, he was primarily responsible for evaluating new data storage technologies under development throughout the company, and utilizing six-sigma processes to stage them for implementation in early engineering models. He is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines, and currently manages the website “Technology for the Journey”.
Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.