After Parkland shooting, do we need social media background checks?


In 2014, Tashfeen Malik's family in Pakistan started to worry about her Facebook posts.

It was the religious extremism that concerned them, a family member told the Los Angeles Times. A year later, Malik and her husband, Syed Rizwan Farook, declared their allegiance to ISIS on social media. Then they killed 14 people in San Bernardino using semiautomatic rifles.


In the aftermath, 26 senators asked the Department of Homeland Security to conduct social media background checks when reviewing visa applications — something President Donald Trump approved last year.

Later, someone using the screen name "nikolas cruz" said he wanted to "shoot people" with his AR-15 in the comments under a YouTube video, according to CNN. He posed on Instagram with guns and knives, and wrote racist and xenophobic slurs. He was even reported to the FBI for writing, "Im going to be a professional school shooter" on YouTube.

Last month, using a legally purchased AR-15, Nikolas Cruz murdered 17 people at Marjory Stoneman Douglas High School in Parkland, Florida. He's only 19 years old.

No politicians called for social media background checks in the wake of the Parkland shooting.

In fact, Republican lawmakers actively held back progress on gun control. A week after the massacre, survivors from the shooting crowded the Florida state Capitol. Their presence didn't sway the Republican-controlled legislature, which voted down a motion to even consider an assault weapon ban.

The promise and peril of social media

If they’re willing to do it to stop terrorism, why don’t politicians want the federal government to comb through the social media feeds of gun buyers in an effort to predict — and stop — mass shootings?

The most obvious reasons are political. In the United States, almost everything is permitted in the fight against Islamic terrorism. Almost nothing is permitted in the fight against domestic gun violence. Other barriers exist, too.

"Using large-scale social media analytics for gun control would be very hard to do," said Jeff Asher, a crime analyst who formerly worked for the CIA and city of New Orleans.

The problem with broad social media sweeps, he noted, is that people don't have to tell the truth online. And even if they did, law enforcement would have to know what to look for -- what distinguishes a truly dangerous individual? Add in the thorny legal and ethical considerations, and the task of singling out shooters seems even more unlikely.

[Image: In this Monday, July 7, 2014, file photo, Chicago police display some of the nearly 3,400 illegal firearms they have confiscated so far this year in their battle against gun violence during a news conference in Chicago. Credit: AP/M. Spencer Green]

Prospective gun buyers, like visa applicants, could be asked to list their social media handles. But not everyone in America uses social media. According to the Pew Research Center, 35 percent of men (and fatal shootings are mostly committed by men) don't use social media at all.


Even if everyone used Twitter or Facebook, the sheer number of posts would be difficult to process. The Edward Snowden leak revealed that the NSA complained internally about having too much data to act on. The FBI would likely encounter the same problem, seeing as 25 million background checks were conducted last year.

After the San Bernardino shooting, two policy experts from the University of California at San Diego explained in the Washington Post why screening people via social media feeds is so difficult. Basically, words posted online don't often translate into action. As they put it:

[H]ate speech is commonplace, and political terrorism is exceedingly rare. Every time a keyword is used that does not lead to terrorist actions, it’s part of a vast amount of noise obscuring a miniscule signal ...

But the problem of ubiquitous false-positives remains. Thousands of angry people use the Internet to proudly declare their support for domestic terrorism every single day. Since everyone understands that most of it is just cheap talk, it is protected speech.
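To see why that signal-to-noise problem is so punishing, consider a back-of-the-envelope calculation. The numbers in the sketch below are purely hypothetical assumptions, not figures from the article or from any agency: even a screening model that catches nearly every real threat and rarely flags harmless posters ends up burying investigators in false leads when the behavior it hunts for is vanishingly rare.

```python
# Illustrative base-rate sketch. Every number here is a hypothetical assumption.
# Suppose only 1 in 1,000,000 posters who use violent language actually goes on to
# commit violence, and a screening model catches 99% of real threats while wrongly
# flagging just 1% of harmless posters. How many of its flags are real?

base_rate = 1 / 1_000_000      # hypothetical share of posters who are truly dangerous
sensitivity = 0.99             # hypothetical: model catches 99% of real threats
false_positive_rate = 0.01     # hypothetical: model wrongly flags 1% of harmless posters
population = 10_000_000        # hypothetical number of posters screened

true_threats = population * base_rate
harmless = population - true_threats

true_flags = true_threats * sensitivity
false_flags = harmless * false_positive_rate

precision = true_flags / (true_flags + false_flags)

print(f"Flags raised:                 {true_flags + false_flags:,.0f}")
print(f"Real threats caught:          {true_flags:,.1f}")
print(f"Share of flags that are real: {precision:.4%}")
# With these assumptions, roughly 100,000 flags contain about 10 real threats:
# investigators would chase on the order of 10,000 false leads per genuine one.
```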

Vague threats of violence aren't enough to lose First Amendment protection, the Supreme Court has ruled. To cross the line, a statement has to be specific, calling for "imminent lawless action."

So, an Instagram post of a guy holding a gun with the caption, "I'm going to kill John Doe tomorrow at 10 a.m." might get flagged. Or it might not. They could be friends joking around. Maybe the guy is holding a BB gun. But what if someone ends up dead?

This is all complicated. Americans should rightly be wary of federal agents knocking down their doors over tweets. And yet, with the hindsight of tragedy, it's natural to look at a menacing comment and say, "Why didn't we stop this guy?"

Facebook and Google can ban you, but the government has a higher bar to clear. Law enforcement can investigate specific violent threats on social media. They can't deny you a gun because you've said angry, stupid things in general.

Instead of casting a wide net on social media, Asher said, police departments could use data they already collect -- such as arrest records and incident reports -- to analyze and visualize the social relationships of criminal suspects.


It's called social network analysis. At the core of the idea is the assumption that violence isn't random. Relationships can help predict where violence will spread. (The U.K. government even has a "how-to" guide for law enforcement officials.)
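As a rough illustration of the idea, here is a minimal sketch, not the method any particular department uses: the people and co-arrest pairs are invented, and real analyses would layer in incident dates, geography, and victim or offender roles. Treating shared arrest or incident reports as edges in a graph, simple centrality measures then point to the people most embedded in, or bridging, a violence-connected network.

```python
# Minimal social network analysis sketch built from hypothetical co-arrest data.
# Each tuple means two people appeared together in the same arrest or incident report.
import networkx as nx

co_arrests = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # a tightly connected cluster
    ("C", "D"), ("D", "E"),               # D bridges the cluster to E
    ("E", "F"),
]

G = nx.Graph()
G.add_edges_from(co_arrests)

# Degree centrality: how many direct ties a person has.
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a person sits on paths between others,
# a rough proxy for who connects otherwise separate groups.
betweenness = nx.betweenness_centrality(G)

for person in sorted(G.nodes, key=lambda p: betweenness[p], reverse=True):
    print(f"{person}: degree={degree[person]:.2f}, betweenness={betweenness[person]:.2f}")
```

The output of a sketch like this is only as good as the records feeding it, which is exactly where the ethical questions below come in.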

Of course, as anyone who's seen Minority Report can tell you, that carries its own ethical questions.

Algorithms have biases because the humans who design them are biased, and the statistics they rely on come from biased systems. ProPublica examined a computer program meant to predict repeat offenders and found that it wrongly flagged black defendants as future criminals twice as often as white defendants.

Still, you could focus on providing aid to those at risk, rather than policing them -- much like Columbia professor Desmond Patton is trying to do in Chicago. He's combining social media algorithms with people's knowledge of local neighborhoods to help identify troubled individuals, and, ideally, help them before they do something they might regret.

Even algorithms with good intentions are fraught, though. In her excellent new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks points out how systems ostensibly designed to efficiently administer government aid can end up punishing working-class people.

Imagine a system that readily flags people in poor neighborhoods filled with gun violence. What happens to that data? Does it factor into whether someone qualifies for government aid?

So, yeah, using algorithms for gun control purposes is tricky.

Right now, however, there are things that are much easier from a technical standpoint to accomplish. Ban assault weapons and high-capacity magazines. Close loopholes that allow guns to be sold without background checks at gun shows and on the internet. Hire more people to process background checks. Don't let domestic abusers have guns. Make mental health services affordable and readily available, instead of blaming "mentally disturbed" shooters after trying to sabotage the Affordable Care Act.

In other words, the number one thing needed to protect Americans from guns isn't social media surveillance. It's for Republican lawmakers -- who take the most NRA cash and consistently vote against gun control measures -- to grow a spine.

