Unmanned Aerial Vehicles (UAVs) are remotely controlled aircraft equipped with elaborate sensors and weapons, delivering lethal strikes guided by operators thousands of miles away in New Mexico, Nevada, or Florida. They are currently used by the CIA in Pakistan's tribal areas as well as in Yemen and East Africa. State-of-the-art stealth drones cased the compound where bin Laden was eventually found. The Air Force calls them 'remotely piloted aircraft'; they are capable of take-off, landing, and days of hovering over a target, and can case a spot for weeks at a time in areas off limits to humans. Today, the United States has 15,000 robots fighting on the ground and 7,000 in the air. The US Army uses mechanical warriors to find and disarm roadside bombs, survey the battlefield, or shoot down incoming artillery shells. Machines with names like Big Dog and BEAR, vaguely humanoid or running on caterpillar tracks, can lift and transport loads of up to 500 lbs.
As of this writing, the US Navy is developing robotic submersibles and robot jet skis that slip beneath ships to perform surveillance. The CIA, in conjunction with the Army, has created SWORDS, miniature robotic systems said to fit on a pencil eraser. We are only a few years away from the point when such miniature machines can form 'hives' that converge and operate independently of human interference.
Peter Bergen and Katherine Tiedemann of the New America Foundation in Washington, D.C. specialize in documenting the accuracy of drone attacks. Their research indicates that the UAV campaign can cancel out its own benefits when drone strikes simply create more terrorism, as has been the case in Pakistan.
Although military robots are on the way to developing autonomy, targeting remains the exclusive preserve of a human operator. Just how long this will remain the case is difficult to anticipate.
Marc Garlasco, a recognized expert on the law of war at Human Rights Watch, has argued that any violation of human rights must involve 'intention', which is not possible for a robot. Although military robots may soon achieve sufficient autonomy, they cannot develop motivation or desire, leaving them outside the framework of moral accountability that applies to genuinely autonomous agents.
Just how do we bridge this divide?
Pet law, which holds owners responsible for the acts of animals that cannot themselves form intent, might be a useful resource in figuring out how to assess accountability for autonomous systems.
As for the statesman's craft of divining a way out of intractable conflicts, there is no robot for that.