US Army Robots Able to Maneuver and Find Targets Using AI
Recent tests of army robots at the US Army’s Yuma Proving Ground have military officials excited about the potential of the Aided Target Recognition (ATR) technology currently under development.
Current military robots are remote-controlled; they cannot analyze information or make decisions on their own. This approach is known as teleoperation. It keeps the operator out of harm’s way, which makes it well suited to tasks like explosive removal or small-scale scouting. The problem is that each robot sent into the field requires roughly one human operator, so teleoperation does not scale to larger operations.
To overcome this limitation, the Army is seeking to improve robots’ ability to make their own decisions. Officials emphasize, though, that the final decision on whether to use lethal force will always be made by a human.
Brigadier General Richard Ross Coffman, who oversees the Robotic Combat Vehicle and Optionally Manned Fighting Vehicle programs, issued a challenge to the Army’s Artificial Intelligence Task Force at Carnegie Mellon University: build a robot that could move on its own and detect targets without using LIDAR.
LIDAR is a technology that uses low-powered laser beams to scan the surroundings and detect obstacles. Self-driving cars typically rely on it to stay on the road and avoid other vehicles. But LIDAR emissions are easily detected, which would give away the robot’s position to the enemy.
The robots tested at Yuma are part of an experimental program called “Origin.” These robots use cameras to make their decisions. Cameras are passive instruments, so they emit no signal that the enemy can detect.
But machines are notoriously bad at discerning depth and distance from a two-dimensional camera image. On top of that, the Army wants the robots to determine likely targets from the camera images.
The challenge, then, is to get the robots to distinguish between a friendly tank and an enemy tank, or between a civilian pickup truck and a pickup truck with a machine gun mounted in the bed. The robot then has to determine the location of any threats so that it can choose the proper course of action to neutralize them.
To train the computers to recognize different threats, human analysts have pored over more than 3.5 million images, labeling the object in each one. A picture of a Russian T-72 tank, for example, would be labeled so that the computer learns what a T-72 looks like; a picture of an American M1 Abrams tank would be labeled accordingly.
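To make the idea concrete, here is a minimal sketch of the kind of supervised training pipeline the article describes: labeled images go in, a compact classifier comes out. The class names, folder layout, model choice, and file names are illustrative assumptions, not details of the Army’s actual system.

```python
# A sketch of supervised image classification under assumed data layout:
# labeled images sorted into per-class folders, e.g. data/t72_tank/001.jpg.
# The class names below are hypothetical examples, not the Army's labels.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder derives the label for each image from its folder name,
# e.g. "t72_tank", "m1_abrams", "civilian_truck", "armed_truck".
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a small pretrained backbone and replace its final layer
# so it predicts one of our classes instead of the original categories.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# The artifact worth shipping to the robot is the learned weights,
# not the millions of training images.
torch.save(model.state_dict(), "atr_classifier.pt")
```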
The breakthrough was in recognizing that the robot doesn’t need to carry all 3.5 million pictures. It only needs the algorithm developed from analyzing those images. The algorithm is far smaller than the library of millions of images, yet it allows the robot to quickly analyze an image and recognize a target without sending it back to a central server and without carrying the images on board.
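Under the same illustrative assumptions as the sketch above, on-board recognition might then look like this: the robot loads only the saved weights and classifies a camera frame locally, with no network round-trip and no image library in storage. The file names here are again hypothetical.

```python
# A sketch of on-board inference: load the trained weights, classify a
# single camera frame locally, and report the most likely class.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Hypothetical class list matching the training folders, in sorted order.
CLASSES = ["armed_truck", "civilian_truck", "m1_abrams", "t72_tank"]

# Rebuild the same architecture, then load only the learned parameters.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("atr_classifier.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

frame = preprocess(Image.open("camera_frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)[0]
print(CLASSES[int(probs.argmax())], float(probs.max()))
```

A saved model of this size is typically tens of megabytes, orders of magnitude smaller than the image library it was trained on, which is what makes carrying it on a vehicle practical.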
It also frees the robot from having to send high-resolution images back to a central location for processing by a server or a human analyst. This matters because the networks used in military operations can be jammed by the enemy, interrupted by terrain, or beset by technical glitches.
And while Coffman believes that it is technically possible for a computer to analyze an image and correctly determine whether it shows an enemy target or something else, he made it clear that the Army intends to always keep a human in the loop.