
Will Artificial Intelligence Replace The Military?


AI has delivered us to a point in time where we have to start seriously thinking about whether we really want killer robots choosing targets to take out in our battles. Ask anyone on the border of Pakistan and Afghanistan what they think about drone strikes with remote selection of targets in a less-than-discerning manner. But we’ve already gone beyond the impersonal drone strike. We’ve taken that leap into a much more stunning form of weaponized robotics.

Last fall, the EU passed a resolution calling for an "international ban on the development, production and use of weapons that kill without a human deciding to fire."

“The power to decide over life and death should never be taken out of human hands and given to machines,” Reuters cited Bodil Valero, security policy spokesperson for the EU Parliament’s Greens/EFA Group, as saying.

This is about a principle known as the Martens Clause, which states that "the human person remains under the protection of the principles of humanity and the dictates of the public conscience." In other words, not under the dictates of robots.

However fantastically well machines operate, they should never be charged with making life-and-death decisions in warfare.

Prior to that, in July last year, 2,400 researchers, including Elon Musk, signed a pledge not to work on robots that can attack without human oversight. That, however, was little more than lip service to the public. It was a scream in the tundra.

In reality, no one’s putting the brakes on this: The most powerful countries in the world, including the U.S., China, Russia, Israel--and even South Korea and the United Kingdom--are moving closer to autonomous weapons systems. The armed drone was just the harbinger, the test run.


In more innocuous-sounding terms, they are called "lethal autonomous weapon systems" (LAWs)--though the acronym is much more ominous when we consider that the LAW is basically going to be given to AI.

Proponents argue that LAWs might cause less “collateral damage”. They also believe that artificial intelligence would be more selective in its strikes than humans.

“Most people don’t understand that these systems offer the opportunity to decide when not to fire, even when commanded by a human if it is deemed unethical,” said Professor Ron Arkin, a roboticist at the Georgia Institute of Technology. According to Arkin, LAWs would be fitted with an “ethical governor” helping to ensure they only strike legitimate targets and avoid ambulances, hospitals, and other off-limits targets.

The reality is that we haven’t even mastered drone strikes or laser-guided bombs. In but one example, in August last year, a laser-guided bomb from the Saudi coalition struck a bus full of schoolchildren in Yemen, killing 40.

True, says Arkin, “There is no guarantee it would work under all conditions. But sometimes is better than never.”

But is it?


As it turns out, that’s an irrelevant question. When there are piles of money to be made, plenty of demand, and everyone else is doing it (so we need to keep pace), killer robots will come, regardless of principles.

DARPA has already announced a new $2 billion investment in "next wave" military AI.

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving. Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them,” according to Agency director Dr. Steven Walker.

As far back as 2017, Russian news agency TASS reported that Russian arms maker Kalashnikov had developed an automated weapon that was able to “identify targets and make decisions.”

The U.S. Marine Corps has already tested a bot with a .50-caliber machine gun, and drone warfare has been a key element of the U.S.’s War on Terror.

But replacing soldiers is a rather giant leap. In a 2013 article published in The Fiscal Times, David Francis cited Department of Defense figures showing that "each soldier in Afghanistan costs the Pentagon roughly $850,000 per year." At the time, a TALON robot rover capable of being outfitted with weapons cost around $230,000.

The endgame, though, is exactly that--replacing soldiers.

Earlier this year, Russian state media published a video of the military’s new combat robots, designed to ‘serve’ alongside infantry on the battlefield. They still require plenty of human intervention, but developers are working on replacing that intervention with algorithms.

Basically, that means letting a robot decide whether you’re a terrorist or not. Or whether you’re with the “wrong” terrorist group of the moment.

By Michael Kern for Safehaven.com
