In the demilitarised zone on the Korean peninsula, a deadly robot sentry stands guard. It can detect humans from two miles away, using heat and motion sensors, and is equipped with a machine gun.
Along the besieged Gaza border fence, an unmanned ground vehicle has patrolled since 2014. It carries a 360-degree camera and loudspeaker. Later models are mounted with a weapon.
And in Afghanistan, a sophisticated weapons defence system protected German bases. It locked on to incoming targets three kilometres away, firing six 35mm automatic guns at 1000 rounds per minute.
The robot wars are here.
READ MORE:
* Killer robots – with ethics – are being developed by Australia Defence Force
* Waikato University researcher examines whether a robot can become legal person
* Elon Musk in plea to UN to stop killer robots – but what are they?
Land and sea mines, the first examples of automatically triggered weapons, have been used since the 17th century. Automated defence systems that can destroy advancing missiles have been protecting warships since the 1970s.
Armed aerial drones were first developed in the late 1980s. This was the start of a technical revolution that is taking human soldiers further and further away from their targets.
These robotic weapons search and identify targets. They are automatic, but not autonomous.
For now, a human operator pulls the trigger, taking a life.
But campaigners and politicians are asking how long before we are taken out of the loop?
New Zealander Mary Wareham is the global co-ordinator for the Campaign to Stop Killer Robots.
“We are talking about the next generation of drones. At the moment the drones used are still controlled by a human operator who is back in a base in Nevada or somewhere else,” she says.
“There are a huge amount of problems with the use of armed drones – a great void of accountability and justice for the civilian victims harmed or killed. And we can only see that getting worse once you have fully autonomous weapons.”
It sounds like the stuff of science fiction movies, conjuring up an image of a red-eyed Arnold Schwarzenegger cyborg assassin sent back in time.
But facial recognition technology and algorithms are already here. If cars can drive themselves, how soon before we upgrade to an army of bots?
“There are multiple different forms out there: drones are probably the most visible one,” says Wareham. “To try and categorise them into one type of definition would be an impossible task.
“But if you try and understand what is the nature or measure of human control in the selection and identification of targets, the critical functions of the weapons system – that is the definition of a killer robot. If it lacks meaningful human control over those critical functions: it is a killer robot.
“There is pretty much wide agreement that we are talking about future weapons systems and not about prohibiting the ones that exist today.
“But every day it takes for governments to dither over regulation … we do get closer to the deployment of a fully autonomous weapon.”
Modern warfare has become fast-paced and conducted at a vast distance. The US believes Russia is working on an autonomous nuclear torpedo and artificially intelligent missiles.
Israel confirmed two years ago that it is building tiny military robots. Britain is developing drone swarms and China is reported to be developing autonomous submarines, ready for deployment early in the next decade.
“The terminators are decades away, but we have got much more fundamental questions about the more rudimentary versions of autonomous weapons that are coming,” Wareham says.
“What if your communications links fall down and you can no longer communicate with the weapons system? What do you program it to do: return to base? Complete its mission and deliver the weapons without any human control?”
She might not have Sarah Connor’s skin-tight muscle vests or aviator sunglasses. But it’s not the first time Wareham has taken on the military industrial complex and won.
In the 1990s, she worked on the International Campaign to Ban Landmines, which succeeded in getting an international treaty banning anti-personnel landmines – and a Nobel Peace Prize.
Now with Human Rights Watch, she documents the use of landmines, cluster munitions, chemical weapons and incendiary weapons in Syria, Libya and Yemen.
The campaign against killer ’bots began in 2012 and reunited Wareham with landmines campaigner Jody Williams.
They have enlisted 100 non-governmental organisations in 53 countries and raised $1 million in the past year.
“We are not going to be able to stop a rogue state or roboticist from creating or improvising their own killer robot. What we are trying to do is prevent the mass-produced, factory-made development of fully autonomous weapons systems, by the thousands,” she says. “And to set up the stigma, and the norm, against them.
“That is what we have done with landmines – 163 countries are now part of that treaty, and more than 50 million anti-personnel landmines have been destroyed from stockpiles under that treaty.
“Thousands of lives and limbs have been saved because of it. That’s why we think it is possible.”
The aim is strict legal regulation – or ideally a ban in the form of a treaty or protocol, like those which already exist for landmines and incendiary devices.
Countries aren’t compelled to sign up, but the stigma makes them more judicious in the weapons they deploy.
Powerful voices – like the late Stephen Hawking and the Vatican – have raised concerns. Last year, the European Parliament passed a resolution calling for an international ban.
In 2015, and again in 2017, more than 100 artificial intelligence pioneers, including Tesla’s Elon Musk, joined forces to push the United Nations into formal talks to ban the development and use of killer robots.
Musk has called it humanity’s biggest existential threat. Tech giant Google announced it will not design or deploy AI (artificial intelligence) related to “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.
Wareham says AI and robotics experts are the “canaries in the coal mine”.
“There has been a lot of obfuscation out there from defence contractors and think tanks saying that we are trying to … prohibit the whole field of artificial intelligence. That’s nonsense.
“And that is disrespectful to the AI experts who have supported the campaign … they feel like the chemists back last century who were trying to prevent the development of chemical weapons, or the nuclear physicists who were trying to prevent the development of nuclear weapons.”
In their letter to the United Nations, the specialists wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
UN secretary-general António Guterres got on board, describing lethally empowered machines as “politically unacceptable and morally repugnant”.
But talks under the UN’s Convention on Certain Conventional Weapons, held last year and again in March in Geneva, could not reach consensus.
Almost 50 states referenced the need for control in their UN General Assembly statements. Nearly 30 countries have explicitly called for a ban and Chile, Brazil and Austria have begun work on a treaty.
But a group of states – including the US, Britain, Australia, Israel and Russia – does not support prohibition. New Zealand is among them.
“Our view is that international law already sets limits on Lethal Autonomous Weapons (Laws), notably through Additional Protocol 1 to the Geneva Conventions,” Foreign Minister Winston Peters said in a letter to the campaign in March.
“An autonomous weapon that could not fulfil the requirements of proportionality, distinction and precautions in an attack would not be lawful. Our priority is to ensure that current international law is implemented more effectively to ensure there will always be meaningful human control over weapons incorporating autonomy.
“New Zealand has lent its support to calls for a high-level political declaration that would set out key disciplines on the development and use of Laws, such as meaningful human control and the operational safeguards needed to ensure them.
“The political declaration we seek would not exclude progress on other options over time, but at this stage it is clear that major countries capable of developing Laws would not join a ban. We support a forward work programme that ensures that discussions can continue among a wide grouping of states on key technical, policy and legal issues relating to Laws, including issues involved in a possible ban.”
Wareham is deeply disappointed in what she says is an “inadequate response”.
And she’ll take her concerns to Parliament’s foreign affairs select committee on Thursday. It’s part of what she calls “the legwork in capitals” to build support.
“All [the Government] can say is that existing law applies to all weapons systems and if killer robots can pass a legal review and be used lawfully then there isn’t a problem with that.
“I have got a real problem with that … it is very narrow-minded. It doesn’t reflect previous disarmament work … it is the same position of the US and Russia: that existing law is fine: it is all going to be fine. That’s our position after six, seven years on killer robots? That’s it?
“I don’t want New Zealand to be left behind, sitting at the back of the room saying existing law is fine.”
In March, a group of New Zealand academics also voiced their concerns about “an unprecedented threat to humanity” and in a letter urged the Government to support a ban, pointing to New Zealand’s role in negotiations for the 2008 Convention on Cluster Munitions and the 2017 Treaty on the Prohibition of Nuclear Weapons.
Wareham wishes Prime Minister Jacinda Ardern would lend her international kudos to the campaign.
“We talk about our anti-nuclear credentials but you don’t get to rest on your laurels … I’m sure there is a long list of groups hoping that our prime minister will take up their cause and you can put the Campaign to Stop Killer Robots on that list.
“We are not satisfied with what we’ve been told by the foreign minister. This does require some proactive leadership; she is obviously willing to do that with France and the tech-sector concerns.
“We have got good leadership, they just need to acknowledge this is a serious concern and something needs to be done about it.”