
Will Artificial Intelligence Replace The Military?


AI has delivered us to a point in time where we have to start seriously thinking about whether we really want killer robots choosing targets to take out in our battles. Ask anyone on the border of Pakistan and Afghanistan what they think of drone strikes that select targets remotely and with less than discerning accuracy. But we've already gone beyond the impersonal drone strike. We've taken the leap into a much more stunning form of weaponized robotics.

Last fall, the EU passed a resolution calling for an "international ban on the development, production and use of weapons that kill without a human deciding to fire."

“The power to decide over life and death should never be taken out of human hands and given to machines,” Reuters cited Bodil Valero, security policy spokesperson for the EU Parliament’s Greens/EFA Group, as saying.

This is about a principle known as the Martens Clause, which states that "the human person remains under the protection of the principles of humanity and the dictates of the public conscience." In other words, not under the dictates of robots.

However fantastically well machines operate, they should never be charged with making life-and-death decisions in warfare.

Prior to that, in July last year, 2,400 researchers, including Elon Musk, signed a pledge not to work on robots that can attack without human oversight. That, however, was just lip service to the public. It was a cry in the wilderness.

In reality, no one's putting the brakes on this: the most powerful countries in the world, including the U.S., China, Russia, Israel, and even South Korea and the United Kingdom, are moving closer to autonomous weapons systems. The armed drone was just the harbinger, the test run.


In more innocuous-sounding terms, they are called "lethal autonomous weapon systems" (LAWs), though the acronym is much more ominous when we consider that the LAW is basically going to be handed over to AI.

Proponents argue that LAWs might cause less “collateral damage”. They also believe that artificial intelligence would be more selective in its strikes than humans.

“Most people don’t understand that these systems offer the opportunity to decide when not to fire, even when commanded by a human if it is deemed unethical,” said Professor Ron Arkin, a roboticist at the Georgia Institute of Technology. According to Arkin, LAWs would be fitted with an “ethical governor” helping to ensure they only strike legitimate targets and avoid ambulances, hospitals, and other off-limits targets.

The reality is that we haven't even mastered drone strikes or laser-guided bombs. In just one example, in August last year, a laser-guided bomb from the Saudi coalition struck a bus full of schoolchildren in Yemen, killing 40.

True, says Arkin, “There is no guarantee it would work under all conditions. But sometimes is better than never.”

But is it?


As it turns out, that's an irrelevant question. When there are piles of money to be made, plenty of demand, and everyone else is doing it (so we need to keep pace), killer robots will come, regardless of principles.

DARPA has already announced a new $2 billion investment in "next wave" military AI.

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving. Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them,” according to Agency director Dr. Steven Walker.

As far back as 2017, Russian news agency TASS reported that Russian arms maker Kalashnikov had developed an automated weapon that was able to “identify targets and make decisions.”

The U.S. Marine Corps has already tested a bot with a .50-caliber machine gun, and drone warfare has been a key element of the U.S.’s War on Terror.

But replacing soldiers is a rather giant leap. In a 2013 article published in The Fiscal Times, David Francis cited Department of Defense figures showing that "each soldier in Afghanistan costs the Pentagon roughly $850,000 per year." At the same time, a TALON robot rover capable of being outfitted with weapons cost around $230,000.

The endgame, though, is exactly that: replacing soldiers.

Earlier this year, Russian state media published a video of the military’s new combat robots, designed to ‘serve’ alongside infantry on the battlefield. They still require plenty of human intervention, but developers are working on replacing that intervention with algorithms.

Basically, that means letting a robot decide whether you’re a terrorist or not. Or whether you’re with the “wrong” terrorist group of the moment.

By Michael Kern for Safehaven.com
