Out-Of-Control Algorithms And Killer Bots: How America's Tech Gurus Are Blaming Smart Machines For Idiotic Mistakes

EFX, NOC, TSLA, META

At the car factory where a robot encroached on a worker’s space and crushed her skull, management blames the machine. The jury, as they say, is still out.

Equifax Inc. (NYSE: EFX) blames its software for exposing the highly sensitive details of 143 million consumers. The software firm fires back and says it’s a people problem.

Facebook Inc (NASDAQ: FB) blames ad-targeting categories that appealed to anti-Semitic users on an out-of-control algorithm, not the humans who designed it.

Tesla Inc (NASDAQ: TSLA) blames a fatal crash on the dead driver, a man haplessly seduced into believing the hype around the company’s semi-autonomous smart vehicle. The feds blame Tesla.

Related Link: How The US, China And Russia Are Weaponizing AI

Does that make CEO Elon Musk, the world’s leading alarmist on the threat of autonomous artificial intelligence and a boss who pressures his people to meet deadlines on his rockets and self-driving cars, the ultimate hypocrite of the smart tech era?

You be the judge. Everybody else is. We are entering an era of machine autonomy, and even the lawsuit industry is choosing sides and sizing up how to assess liability when the Internet of Things goes horribly wrong.

Blaming Tech Is The Big New Problem Of The AI Age

As tech gets smarter, the race to unleash AI presents new opportunities for hackers to tamper, untraceably, with algorithms built by scientists driven to get autonomous systems to market.

And when machines designed to teach themselves make bad decisions, the reasons often elude human grasp, leading people to blame their creations instead of their creators, says Eugene H. Spafford, a Purdue University professor of computer science and philosophy and founder of the Center for Education and Research in Information Assurance and Security.

The Problem Of Tracking AI Errors

“It’s an area of big concern. Not just cars but UAVs (unmanned aerial vehicles), trucks. We should be concerned about how those things are put together,” Spafford told Benzinga. “If one of these things results in death or injury, who would be responsible? I’m not sure some of the organizations pushing the technology have given it sufficient thought.”

One of the problems with emerging systems, and with machine learning in general, is tracing how they arrive at decisions that often surprise the scientists who feed them massive amounts of data.

Spafford suggests that autonomous cars, already proven to be eminently hackable, might defeat even an expert investigation into whether a robotic behavior was the result of a hack.
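
One partial answer engineers use today is input-gradient saliency: asking which pixels most influenced a model’s output. The sketch below is illustrative only, using an off-the-shelf `resnet18` as a stand-in for a vehicle’s vision model; it is not the tooling Spafford or DARPA describes.

```python
import torch
import torchvision.models as models

# Stand-in classifier; a real vehicle's perception stack is far more complex.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def saliency_map(image: torch.Tensor) -> torch.Tensor:
    """Rank pixels by how strongly they influenced the top predicted class."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)                       # shape: (1, num_classes)
    scores[0, scores.argmax()].backward()       # gradient of the winning score
    return image.grad.abs().max(dim=1).values   # collapse RGB to one heat map

# Usage: with `x` a 1x3x224x224 tensor of a road scene, bright regions in
# saliency_map(x) mark the pixels the model leaned on for its decision.
```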

Some Scientists Try To Head Skynet Off At The Pass

While some scientists race toward the autonomous world of AI and robotics, others are trying to keep pace with the safeguards needed to prevent, say, a mass hack of car fleets from rivaling the enormous number of highway deaths already caused by human error, a Darwinian trade-off if ever there was one.

The Defense Advanced Research Projects Agency has created the Supply Chain Hardware Integrity for Electronics Defense (SHIELD) program to enhance supply chain traceability.

Northrop Grumman Corporation (NYSE: NOC) is leading a team to develop tiny chips to enable traceability. Spafford said DARPA is delving deeply into the whole realm of traceability, figuring out why autonomous systems sometimes make surprising decisions.

“One of the big problems is being able to go back and find out why it did the thing it did,” Spafford said. “Even why it did something legitimately. Why did it treat the stop sign that way? It’s difficult to go back and trace that.”

Researchers at the University of Washington recently ran a fairly frightening experiment: they put pieces of tape on a stop sign and convinced a self-driving car’s vision system it was a speed limit sign. It was a low-tech way to seriously mess with the head of state-of-the-art autonomy.

“That suggests that if someone finds a way to hack in and do it, we may not be able to find out why it happened or who did it,” Spafford said. “I’m concerned about these things being done accurately.”
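
The mechanism behind the tape attack is easier to grasp in its digital form. The UW team used physical stickers, but a minimal sketch of the underlying idea, the classic fast gradient sign method (Goodfellow et al.), shows how a barely visible perturbation can flip a classifier’s answer. The model and `epsilon` value here are illustrative assumptions, not the researchers’ actual setup.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Nudge every pixel one tiny step in the direction that most
    increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()   # keep pixel values valid

# Usage: with `x` a 1x3x224x224 image of a sign (values in [0, 1]) and `y`
# its true class index, a successful attack changes the model's argmax:
#   x_adv = fgsm_attack(x, y)
#   print(model(x).argmax(1), model(x_adv).argmax(1))
```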

Holding Algorithms Accountable

Leave it to the legal community to have already started working the liability angle. As usual, the law lags behind tectonic jumps in tech.

“Traditional tort law would say that the developer is not liable,” wrote Jeremy Elman, a partner at the global corporate legal giant DLA Piper and an executive of the firm’s Intellectual Property and Technology and Emerging Growth practices. “That certainly will pose Terminator-like dangers if AI keeps proliferating with no responsibility. The law will need to adapt to this technological change in the near future.”

For their part, computer scientists are sending warning signals.

To virtually no media coverage, the preeminent Association for Computing Machinery on Sept. 14 issued a demand for algorithmic accountability, in essence a warning that humans are hiding behind the machine-learning technology they create when things go wrong.

So while Elon Musk harps endlessly on the threat of AI, warning that humans must incorporate cyborg components to compete or else flee to Mars, he is a full partner in building the auto-machine future.

And while the moral philosophers wonder about the dangers of AI, the militarized nations are racing to weaponize AI for the next generation of killing machines.

“There are people who don’t care about the dangers,” Spafford said.

Related Link: Why A Charming Synthezoid Is Coming Your Way


