Wednesday, March 3, 2021

Israel Keeps the Fight outside the Temple Walls: Revelation 11


Robo-Snipers, Suicide Drones and Robattle – The Story of Israel’s Defense Industry

With its considerable line-up of ‘robo-snipers’, ‘suicide drones’ and ‘Robattle’ battlefield robots, Israel’s defence industry is pushing the envelope of autonomous machines with only token human involvement.

In recent years, the use of autonomous weapons has seen a dramatic increase on modern battlefields – and the proliferation has increased international concern over the ethics governing their use.

Israel has established itself as a pioneer of autonomous weapons, specifically with the Harop ‘suicide drone’, the Robattle wheeled battlefield robot, and the Sentry-Tech automated border-control machine gun.

The increasing demand for automated weapons comes amid a global revolution in military affairs (RMA), as nations seek to exploit the advantages of offensive firepower delivered by tireless machines, without the loss of human life.

Suicide drones, or ‘loitering munitions’ as they are technically known, are a hybrid between drones and guided missiles. They are defined by their ability to ‘loiter’ in the air for long periods before striking a target that enters a pre-defined zone, or while waiting for human guidance.


Euphemistically described as a ‘fire-and-forget’ weapon, Israel Aerospace Industries’ Harop autonomously attacks any target meeting previously identified criteria, but includes a ‘man-in-the-loop’ feature that technically allows a human to prevent an attack from taking place.

Given the cutting-edge nature of autonomous weapon platforms, there is little in the way of international law regulating their production or sale.

Demand for autonomous ‘suicide drones’ is at an all-time high after the Azerbaijan-Armenia conflict of 2020, which set a benchmark for the effective use of kamikaze drones against conventional military forces. Throughout the conflict, Azerbaijan made prodigious use of Israeli ‘loitering munitions’ and remotely piloted Turkish drones.

With demand comes opportunity. On February 11, a group of Israelis, including several former defence officials, came under investigation for illegally designing, producing and selling ‘suicide drones’ to an unnamed Asian nation.

“The Israelis are suspected of national security offenses, breaching arms export laws, money laundering and other financial offenses,” the Israeli newspaper Haaretz reported.

For the Israeli authorities, however, the crime was not a matter of weapons regulation.

Instead, it was making personal profit from technology owned by Israel Aerospace Industries (IAI). In the same week, Israel made three official sales to anonymous Asian nations.

Is the concern real?

Researchers from the Institute for Strategic, Political, Economic and Security Consultancy argue that development in automation is moving so fast that it is outpacing the laws that could even hope to regulate it.

They describe a slippery slope in which the role of human beings in decision loops is quickly fading away, without a clearly defined line between what is acceptable and what is immoral.


Take the Israeli Sentry-Tech border-control turrets currently deployed along Gaza’s border. They were designed to prevent Palestinians from leaving the Gaza Strip and entering Israeli territory.

These automated ‘robo-snipers’ are designed to create “automated kill-zones” at least 1.5 km deep. But they aren’t merely robotic guns. The turrets feature heavy-duty 7.62 mm machine guns tied into a network spanning the entire border. If any turret detects human movement, the entire chain of guns can train their sights and concentrate firepower on the interloper. Some turrets can also fire explosive rockets.

With such overlapping fields of fire, even heavily armoured vehicles would be quickly eliminated. The effect on a human body would be overwhelming, disproportionately violent, and would leave little in the way of human remains.

To increase the system’s effectiveness, its automation draws on information from a larger network of drones and ground sensors spanning the 60-kilometre border.

Rafael, Sentry-Tech’s manufacturer, emphasises that a human operator in a hardened bunker still has to make the ultimate decision.

Speaking to Wired magazine, Barbara Opall-Rome, former Defense News bureau chief, reported that the turret was designed as an automated closed-loop system with no need for human input.

She notes, “until the top brass is completely satisfied with the fidelity of their overlapping sensor network – and until the 19- and 20-year-old soldiers deployed behind computer screens are thoroughly trained in operating the system — approval by a commanding officer will be required before pushing the kill button.”

The chilling testimony suggests a gradual erosion of oversight over lethal autonomous weapons, made possible by a lack of state-enforced regulation and by international norms that have yet to adapt to the risks and possibilities of modern technology.

Moral challenge

Concern over the development of autonomous weapons is not limited to ethicists. In 2015, more than a thousand artificial intelligence researchers and notable public figures such as Stephen Hawking and Elon Musk co-signed an open letter to the United Nations calling for a ban on autonomous weapons.

Their concerns are many. Vocal critics of automation believe that defence companies are building fully autonomous weapon platforms with only token, add-on human-involvement pathways that can easily be removed.

More critically, it is nearly impossible for an outside observer to distinguish between a kill made with human oversight and one made by machine autonomy, blurring the lines of accountability on the battlefield.

The rapid rise of automated weaponry has far reaching legal, ethical and security implications.

Can automated weapons distinguish between soldiers and civilians, and will military conceptions of acceptable risk and collateral damage be coded into their parameters? Who can be held responsible if an autonomous weapon makes a mistake? How do automated machines that seek to optimise kill/death ratios and accuracy account for morality, human rights law, and just cause?

Most importantly, is it ethical to allow a machine to take a human life without a conscious human decision to do so? For many, a machine making life-and-death decisions violates the concept of human dignity, while absolving human decision-makers of the burdens of morality and responsibility.

This article has been adapted from its original source.
