Artificial intelligence can't solve online extremism issue, experts tell House panel

A group of experts on Tuesday warned a House panel that artificial intelligence is not capable of sweeping up the full breadth of online extremist content — in particular posts from white supremacists.

At a House Homeland Security subcommittee hearing, lawmakers cast doubt on claims from top tech companies that artificial intelligence, or AI, will one day be able to detect and take down terrorist and extremist content without any human moderation.


Rep. Max Rose (D-N.Y.), the chairman of the counterterrorism subcommittee holding the hearing, said he is fed up with responses from companies like Google, Twitter and Facebook about their failure to take down extremist posts and profiles, calling it “wanton disregard for national security obligations.”

“We are hearing the same thing from social media companies, and that is, ‘AI’s got this, it’s only gonna get better,’ ” Rose said during his opening remarks. “Nonetheless … we have seen egregious problems.” 

“We’ve been looking at this problem for months now,” he continued. “We’ve been approached by the social media companies with this libertarian, technocratic elitism that’s highly, highly disturbing and it centers around the claim that AI can accomplish everything.”

The lineup of experts, including Facebook’s former chief security officer and current Stanford academic Alex Stamos, agreed that AI is not ready to take on the complicated problem of terrorist content — and raised questions over whether it ever will be.

Stamos said the “world’s best machine learning resembles a crowd of millions of preschoolers.” 

“No number of preschoolers could get together to build the Taj Mahal,” he explained.  

He also raised concerns about the variety of fringe platforms, such as 8chan and Gab, that seek to host white supremacist groups and ideologies.

“These white supremacist groups have online hosts who are happy to host them,” Stamos said. “That is not true for the Islamic State.”

The House Homeland Security Committee has kicked its investigation of online extremist content into high gear over the past several months, following the livestreamed and viral mass shooting of worshippers in a Christchurch, New Zealand, mosque.

The incident, which left platforms scrambling to take down millions of copies of the video, has sparked questions from lawmakers over how seriously the platforms treat acts of white supremacy.

Representatives with Facebook, Google and Twitter are slated to testify before the full House Homeland Security Committee on Wednesday about their efforts to counter online terrorist content and misinformation.

Top tech companies, including Facebook, have claimed that their AI systems are already successfully detecting a huge swath of terrorist and extremist content. But experts at the hearing said those claims are often overblown.

“Context is vitally important and context can often be hard for algorithms to detect,” Ben Buchanan, an assistant teaching professor at Georgetown University, said.

It is often difficult for artificial intelligence systems to distinguish between educational videos about atrocities and content that promotes those acts of violence.

After the hearing on Tuesday, House Homeland Security Chairman Bennie Thompson (D-Miss.) told reporters that the committee is currently in the exploratory stage and is not working on any legislative proposals.

“After we conduct all of our oversight, if the companies demonstrate that without governmental regulation, they can do this, then I would say that there’s no need,” Thompson said. “But we’re still in the informational stage of seeing whether or not that is, in fact, the case.”
