Datametrex AI Ltd V.DM

Alternate Symbol(s):  DTMXF

Datametrex AI Limited is a technology-focused company with exposure to artificial intelligence, healthcare, and mobile gaming. It is focused on collecting, analyzing and presenting structured and unstructured data using machine learning and artificial intelligence. The Company's products include AnalyticsGPT, Cyber Security, and Healthcare. The AnalyticsGPT platform scans vast data streams from social media, news, blogs, forums, messengers, enterprise data, and the dark web, creating predictive analytics. Cyber Security is a deep analytics platform that captures, structures, and visualizes vast amounts of unstructured social media data, which is used as a discovery tool that allows organizations to make decisions. It offers Nexa Products, which consist of NexaSecurity and NexaSMART. Healthcare consists of Imagine Health Centres, a multidisciplinary healthcare facility, and Medi-Call, a telehealth platform. The Company also offers a mobile blockchain game, Cereal Crunch.


TSXV:DM - Post by User

Post by Oden6570 on May 01, 2022 3:33am
325 Views
Post# 34645618

AI a scapegoat for democracy’s decline


On the eve of another election at home — June 2 is voting day in Ontario — spare a thought for artificial intelligence (AI) as it insinuates itself into democracies here and around the world.

When Twitter causes people to lose their minds, mind-controlling algorithms are blamed for serving up extremist tweets.

When Facebook is faulted for election losses, Russian AI is blamed for distorting the democratic discourse.

Will all those algorithms, which feast on fear and loathing, keep polarizing our politics and undermining democracies?

Those are the questions we grappled with at a conference on the risks — and realities — of AI at the Munk School of Global Affairs & Public Policy. The first thing you learn about machine learning, the dominant form of AI, is not to let its daunting complexity scare the wits out of you.

AI is not rocket science, nor is it reinventing political science. It’s a data tool that can be used and abused, depending on how humans — more precisely, politicians — handle its powers.

The panel I moderated — on AI and the future of democracy — yielded predictions that were slightly terrifying but also surprisingly reassuring (full disclosure: I’m a Senior Fellow at Munk).

Our guru on artificial intelligence — Henry Farrell, a political scientist at Johns Hopkins School of Advanced International Studies — shared his research on how it interacts with human and political intelligence (he also runs the “Monkey Cage” blog on democracy at the Washington Post).

Farrell addressed head-on the proposition that AI is weakening the West while strengthening the rest. The fear is that algorithms are flooding democracies with “destructive nonsense, while nondemocratic regimes are stabilized by the combination of machine learning and surveillance.”

There are growing predictions that a future AI autocracy like China might “beat democracy at its own game” and supplant our system of government. But his research suggests that dictators who depend on AI tools for surveillance and suppression are sowing the seeds of their own destruction through isolation. By repressing individual expression and reinforcing their own top-down styles of governance, they are flying blind — without the benefit of crowd-sourcing, whether at street level or online.

Yet democracies still face undeniable challenges in dealing with the rapid-fire iterations and gyrations of machine learning; no one disputes that AI can codify and amplify our worst impulses.

The algorithms underpinning Twitter and Facebook are quantitative but also predictive — they analyze large numbers of past decisions in order to anticipate or influence future impulses. Amazon’s website makes money by recognizing patterns in your purchasing history to recommend books or boots that it believes you’ll buy next; TikTok studies your fondness for cat videos (or the dance moves of NDP Leader Jagmeet Singh) before cueing up more of the same to keep you engaged for advertisers.

In fact, political operators have been exploiting similar marketing research tools for decades — relying on the statistical power of public opinion polling, the black magic of focus groups and the dark arts of attack ads. More recently, censuses and other valuable databases have been harvested and mined, sliced and diced, to micro-target sub-demographic groups with eerie quantitative accuracy and predictability but no accountability.

That’s a long way from just licking envelopes, and it came long before AI. Algorithms merely ramp up the old goals of political combat with new weapons.

The best evidence suggests that these platforms — from embryonic search engines to early social media — were well on the way to coarsening social discourse even before they armed themselves with algorithms that prioritized controversy over chronology in their online feeds (the old Facebook published your posts in sequence; the newer algorithms rank content by the most “likes” and “shares” and sort them at the top of your feed). All of which suggests that the core, preexisting problem is how social media brings out our unfiltered, uninhibited inner selves — rather than merely the AI technology that relies on retweets for rankings.
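The shift the column describes, from posts shown in sequence to posts ranked by "likes" and "shares", can be sketched in a few lines. This is a hypothetical illustration only; the post fields and the scoring weights are invented for the example and are not any platform's actual algorithm:

```python
from datetime import datetime

# Hypothetical posts; fields and numbers are invented for illustration.
posts = [
    {"text": "calm local news",   "time": datetime(2022, 5, 1, 9), "likes": 4,   "shares": 1},
    {"text": "outrage-bait take", "time": datetime(2022, 5, 1, 7), "likes": 900, "shares": 350},
    {"text": "cat video",         "time": datetime(2022, 5, 1, 8), "likes": 120, "shares": 30},
]

# Old-style feed: newest first, no weighting.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# Engagement ranking: score each post by likes and shares, so the most
# provocative post rises to the top regardless of when it was posted.
engagement = sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

print([p["text"] for p in chronological])
print([p["text"] for p in engagement])
```

Under the chronological rule the quiet, recent post leads the feed; under the engagement rule the controversial post does, which is the "controversy over chronology" trade-off in miniature.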

The possibility that AI techniques can “reshape people’s political opinions” remains unproven: “It is really, really hard to persuade people of things that they don’t want to believe in the first place,” Farrell noted.

Rather than simply scapegoating AI for the decline of democracy, we also need to look at already ingrained tendencies that were undermining our discourse for decades.

“If we did not have machine learning, if we were back in the … mid-1990s, we probably would not be in a world that was very different,” Farrell argued. “It would probably be happening anyway.”

Which reminds me of how opponents of gun control like to argue, seductively, that guns don’t kill people, people kill people. By analogy, AI doesn’t make people mad, people make people mad (mad as in angry but also crazy).

The point is that machine learning isn’t entirely innocent — it’s more of an accessory after the fact: By weaponizing murmurs of disagreement into mass discord, AI is turning knife fights into gunfights.

My bigger concern is that the mass media keep publicizing social media, thereby amplifying its algorithms to even broader audiences. Big newspapers obsessively shine a spotlight into the dark recesses of Twitter — a platform where comparatively few real people (beyond bots) spend much time tweeting.

And then there’s American television. For all the attention focused on secret Twitter algorithms — the black box of machine learning — the open manipulation of U.S. public opinion by Fox News takes place in plain sight.

So who is the bigger influencer (or manipulator) — TV or AI?

Farrell’s answer is that it’s a symbiotic relationship. Fox harvests extremist tweets from outliers on Twitter — trolling the trolls — in order to showcase them on broadcasts, acting like a “conveyor belt” from social media to mass media.

That means AI is not so much exercising mind control as it is influencing how the body politic sees itself. By holding up a distorted “mirror” that reflects mostly what Twitter trolls are mouthing off about, rather than what society is quietly thinking about, the algorithms dumb down our discourse.

That said, much of the anger captured by social media can be real, even if virtual. It shows up online in real-time, rather than just at election time; and even if people are digging deeper into echo chambers and rabbit holes, their grievances may still have real roots.

“Internal faults within democracy are, perhaps, exacerbated by social media, but have more fundamental causes,” Farrell concludes. “People get angry when they don’t see opportunities for their kids anymore.”

A timely message for the province’s politicians as they prepare to hit the campaign trail: Don’t blame the algorithm for the anger.

But here’s another pitfall of polarization. People may start giving up on elections if they think too many heads are in the sand — or stuck in a rabbit hole:

“If you think that your fellow citizens are brain-controlled zombies who are impervious to argument, you’re not going to be that interested in democracy,” Farrell mused.

Turns out the enduring danger to democratic idealism is not an algorithm but cynicism. Turn down the noise, but don’t tune out elections.
