In my previous articles I have explored the benefits of robots in our society, from farming and social care to, most recently, the use of robotic equipment in the fight against Covid-19. However, there is a darker, more sinister side to the robotics industry, and in this article I will explore some of the important ethical issues raised by developing autonomous robots for military use.
Robots have been used in the military for many years. In 2018 the UK government purchased 56 new Harris T7 bomb disposal robots. These robots are known as unmanned ground vehicles (UGVs) and are equipped with high-definition cameras, lightning-fast datalinks, an adjustable manipulation arm, and tough all-terrain treads, allowing them to neutralise a wide range of explosive threats.
Harris T7 bomb disposal robot
The use of ‘advanced haptic feedback’ allows operators to ‘feel’ their way through the intricate process of disarming a device from a safe distance. The use of this type of robot is very positive: it protects innocent civilians and shields soldiers from threats such as roadside bombs while they make explosives safe. The military also uses semi-autonomous drones. These are remotely controlled robots, so there is a human in the loop who flies the drone, finds a target and decides when to use lethal force.
However, several countries, including the US, China, Israel, South Korea, Russia and the UK, are developing autonomous weapon systems, in which robotic weapons can go out on their own, find their own targets and kill them without any human supervision. Professor Noel Sharkey of The University of Sheffield states that fully autonomous robots raise important ethical issues, such as how a machine will decide when lethal force is appropriate. This is concerning because it is unclear how that question can be answered, and some experts believe that, if left unchallenged, these developments could lead to a robotic arms race.
Several organisations have been campaigning against the use of robots as autonomous weapons. The Foundation for Responsible Robotics is run by over 20 of the world’s leading tech scholars, writers and roboticists, and is now growing rapidly with many new members and partners. The foundation states that:
“Robots are tools with no moral intelligence, which means it’s up to us – the humans behind the robots – to be accountable for the ethical developments that necessarily come with technological innovation. Addressing ethical issues in robotics and AI means proactively taking stock of the impact these innovations will have on societal values like safety, security, privacy, and well-being, rather than trying to contain the effects of robots after their introduction into society.”
The Campaign to Stop Killer Robots is urging states to form an international treaty to ban the development, production, and use of fully autonomous weapons. In 2019 they brought their robot campaigner to the United Nations General Assembly in New York to call for a ban on killer robots. You can watch the highlights here: https://www.youtube.com/watch?v=XtvkJYkVRyg. To learn more about the threat of fully autonomous weapons, watch this eerie video https://www.stopkillerrobots.org/learn/, which imagines what might happen if fully autonomous weapons replace troops.
Whilst robotic technology continues to be developed for all the right reasons in areas such as medicine, the care sector and agriculture, the use of robots as lethal weapons should be a huge concern for all of us. Many autonomous weapon systems are already in development, and it’s our responsibility to make sure that humans stay in control.
What are your thoughts? Will the use of robotic weapons actually save lives, or will it make the prospect of armed conflict more likely? Perhaps you’re thinking of a career in the military or IT – is this what you expected it to be like? We’d love to hear from you, so please get in touch below.
Building a Future with Robots – The University of Sheffield. Future Learn MOOC