According to Wired, more schools across the country are investing in machine learning tools that monitor students’ social media.
The goal of these technologies is to detect and flag threatening language so that school officials can act before a dangerous situation unfolds, in the hope of preventing events such as the Parkland, Fla. shooting.
For example, the Lakeview school district in Battle Creek, MI, uses a social media monitoring service from a company called Firestorm, Wired reports. Firestorm “helps schools develop safety and crisis response policies…[and] pitches its social media monitoring tool as being able to help schools prevent everything from sexting and bullying to mass shootings.” When Firestorm detects keywords that might hint at violence or a conflict, the district’s superintendent, Blake Prewitt, receives an email. The alerts help him keep tabs on and protect his 4,000 students and 500 staff.
“If someone posts something threatening to someone else, we can contact the families and work with the students before it gets to the point of a fight happening in school,” Prewitt told Wired.
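Firestorm’s implementation isn’t public, but the workflow Wired describes, keyword matching followed by an email alert, can be sketched in a few lines of Python. The watchlist, the Post structure, and the alert routine below are illustrative assumptions for the sake of the example, not Firestorm’s actual design:

```python
# Minimal sketch of keyword-based flagging with an alert, assuming a
# simple watchlist and post structure; this is NOT Firestorm's design.
from dataclasses import dataclass

# Hypothetical watchlist; a real system would be far more sophisticated.
WATCH_TERMS = {"gun", "shoot", "fight", "kill"}

@dataclass
class Post:
    author_handle: str
    text: str
    is_public: bool

def flag_post(post: Post) -> set:
    """Return any watchlist terms found in a public post (empty set if none)."""
    if not post.is_public:  # private posts are never scanned
        return set()
    return WATCH_TERMS & set(post.text.lower().split())

def alert_superintendent(post: Post, hits: set) -> None:
    # A production system would send an email (e.g., via smtplib);
    # printing stands in for that here.
    print(f"ALERT: terms {sorted(hits)} found in public post: {post.text!r}")

if __name__ == "__main__":
    sample = Post("student123", "there is going to be a fight after school", True)
    hits = flag_post(sample)
    if hits:
        alert_superintendent(sample, hits)
```

Even this toy version hints at the context problem discussed below: a term like “fight” could just as easily refer to a video game or a sports rivalry.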
Could playing it safe make decision makers sorry?
While decision makers, including superintendents and principals, might glean useful information from these machine learning solutions, a complication arises: “There is some debate over whether—or how—that information can be accurately or ethically extracted by software,” Wired says.
Research has shown that parents already find it tough to interpret the culture and language their children adopt in different online communities, and that difficulty “could be exacerbated by an algorithm that can’t possibly understand the context of what it was seeing,” Amanda Lenhart, a New America Foundation researcher who studies teenagers’ internet use, told Wired.
As a result, decision makers interested in investing in similar machine learning solutions should weigh the pros and cons the technology has to offer. If they do opt for one, it is worth establishing a solid relationship with the solution provider and becoming familiar with the provider’s policies and workflows; for example, Wired says that Firestorm makes clear to its customers that it only scans public posts and targets topics and locations rather than individuals. Doing so helps ensure that decision makers use the technology properly and get the greatest ROI, and that students don’t feel as if Big Brother is watching their social media trail.
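For illustration, here is a minimal sketch of that scoping policy: scan only public posts, and select them by topic terms or a school geofence rather than by author. The field names, topic list, and coordinates are assumptions made for this example, not Firestorm’s actual configuration:

```python
import math

# Hypothetical scope configuration; values are illustrative only.
TOPIC_TERMS = {"fight", "threat", "weapon"}
SCHOOL_GEOFENCE = (42.32, -85.18, 5.0)  # approx. Battle Creek, MI: lat, lon, radius (miles)

def near(geo, fence) -> bool:
    """Rough distance check: is (lat, lon) within the fence's radius in miles?"""
    lat, lon = geo
    flat, flon, radius_mi = fence
    # Equirectangular approximation is adequate at city scale.
    dx = (lon - flon) * math.cos(math.radians((lat + flat) / 2))
    dy = lat - flat
    return math.hypot(dx, dy) * 69.0 <= radius_mi  # ~69 miles per degree of arc

def in_scope(post: dict) -> bool:
    """Scan a post only if it is public AND matches a topic or the geofence;
    who wrote it never enters the decision."""
    if not post.get("is_public", False):
        return False
    topical = any(term in post.get("text", "").lower() for term in TOPIC_TERMS)
    located = post.get("geo") is not None and near(post["geo"], SCHOOL_GEOFENCE)
    return topical or located
```

Note that the selection logic keys on content and place, never on a list of student accounts, which is the distinction Firestorm draws for its customers.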