Jeff Parsons – Metro, June 8, 2020
The US Air Force is planning to pit an autonomous combat drone against a human pilot to find out who’s the best of the best. The aerial match-up is scheduled for 2021 and could pave the way for unmanned combat fighter jets flown by artificial intelligence (AI) in the future.
According to Lt Gen Jack Shanahan, the head of the Pentagon’s Joint Artificial Intelligence Center, it’s a ‘bold, bold idea’.
Lt Gen Shanahan said he exchanged emails with the team leader on the human vs machine dogfight last weekend. He explained the US Air Force Research Laboratory (AFRL) will attempt to field ‘an autonomous system to go up against a human, manned system in some sort of air-to-air.’
He went on to say that while the use of AI is limited at the moment, in the future humans and machines would work together to make a ‘big difference’.
Steve Rogers, the team leader for the AFRL project, told Inside Defense: ‘Our human pilots, the really good ones, have a couple thousand hours of experience.’
‘What happens if I can augment their ability with a system that can have literally millions of hours of training time? … How can I make myself a tactical autopilot so in an air-to-air fight, this system could help make decisions on a timeline that humans can’t even begin to think about?’
Shanahan said that ‘legacy systems’ aren’t going away any time soon but the decision to pit man against machine highlights the US drive towards more autonomous weaponry.
Last year, details came to light about a Google employee who left the tech giant after being asked to work on artificially intelligent military weaponry.
She warned such weapons could start World War 3.
Laura Nolan has since argued that AI weapons should be outlawed in the same way chemical weapons are.
Nolan is a vocal member of the Campaign to Stop Killer Robots, which lobbies the UN in New York and Geneva about the possibility of a war being started by drones.
‘There could be large-scale accidents because these things will start to behave in unexpected ways,’ she told the Guardian.
‘Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.’
The project Nolan worked on at Google – known as Project Maven – was shelved after over 3,000 employees signed a petition against it. The idea behind it was that a trained artificial intelligence could cycle through hours of drone video content and quickly differentiate between humans and objects. In the future, it could make UAVs even more accurate when searching for a target.
Since resigning, Nolan has predicted that the demand for autonomous weapons will grow, with fatal consequences.
‘You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food,’ she said.
‘The machine doesn’t have the discernment or common sense that the human touch has.’