Technology advances at an unprecedented rate. New products and innovations are released daily, and over the years artificial intelligence and machine learning, notably in robotics, have become ever more prevalent and increasingly capable.
These advancements are generally created to aid in the betterment of society and the environment, whether that is robots helping children speak out about problems, drones delivering urgent medication quickly, or robotic dogs scouting for bombs.
But sometimes the technology we use to help can also be used to harm.
Lethal autonomous weapon systems (LAWS), or ‘killer bots’, are independent robotic systems that can search for and engage a target based on pre-programmed information. Using artificial intelligence, they can identify, select and kill targets without the need for human intervention. Algorithms alone determine whether a LAWS will terminate a target.
The arguments:
Bots to harm
A draft policy proposed that the San Francisco Police Department (SFPD) be granted the use of deadly force by means of robots when responding to incidents where the risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to the department.
This proposal was approved by the San Francisco Board of Supervisors via majority vote at the end of 2022.
The robots would be armed with explosives and designed to terminate human life, but only if the circumstances are extreme and all other attempts to save or prevent the loss of innocent lives have been exhausted. Unlike autonomous LAWS, the kill order is issued via human intervention.
However, it is not the police officers themselves who issue the kill order; they would simply be able to request the robots’ use. The authority to deploy would lie with a select few high-ranking officers.
Bots not to harm
Much like gun laws in the US, killer bots have been met with strongly opposing views.
Dr Catherine Connolly from the Stop Killer Robots group opposed the decision, telling the BBC that it is a “slippery slope” and a step towards distancing humans from the realities of killing.
And whilst a majority vote carried the motion forward, some members of the Board itself opposed the use of lethal bots.
The common thread of opposition is that the bots will not only desensitise humans to killing, but that further militarisation of the police force is a danger in itself, especially for those from marginalised communities.
The reversal
Public criticism, including from civil rights groups and other police groups, ultimately led the Board to reverse its decision just one week later, with some board members noting that, upon reflection, the original decision approving the use of the robots did not sit well with them.
Despite the reversal, the Board sent the issue back to committee for further review, which means it could still decide to allow the use of deadly force, in certain circumstances, at a later date. However, the public pushback is expected to be just as considerable.
The future of killer bots
Though there are strongly opposing views, and reasons behind those views, for and against the use of lethal robots, it must be noted that, at least for the foreseeable future, they will not be fully autonomous. There will always be an element of human involvement, from building and programming the machine through to issuing a kill order, which means someone can always be held accountable for their actions.
With robotics advancing every day, the question arises: if tiny drones or mechanical dogs can be operated remotely and can give humans the information they need to neutralise a threat, do we really need killer robots at all?
However, technology is a powerful tool, and it is only becoming more advanced. If the use of lethal robots does become an accepted norm, it lays the foundations for dehumanising lethal decisions and suggests a desensitised nation; once humans are reduced to objects, it starts to resemble a dystopian society.
For more of the latest industry and smart tech news, check out IoT Insider’s smart homes, smart cities and news pages, or our sister site, Electronic Specifier, which has a dedicated Robotics section! Or you can visit our LinkedIn page.